Hi All,
In this tutorial we will go over a demonstration of loading more data on page scroll with Laravel Livewire. You will learn how to automatically load more records as the user scrolls, and you can use this approach with Laravel 6, Laravel 7 and Laravel 8.
Here, I will give you a very simple example of loading more data on the window scroll event, using only the livewire/livewire package.
So, let's follow the steps below and you will get the following layout:
Step 1: Install Laravel 8
First of all we need a fresh Laravel 8 application. Open your terminal or command prompt and run the command below:
composer create-project --prefer-dist laravel/laravel blog
Step 2: Create Dummy Records using Tinker Factory
You need to run the following commands to create dummy records in your users table. Let's run both commands:
php artisan tinker
User::factory()->count(100)->create()
Step 3: Install Livewire
Now in this step, we will simply install Livewire into our Laravel 8 application using the command below:
composer require livewire/livewire
Step 4: Create Component
Now we will create a Livewire component using its artisan command. Run the command below to create the load-more component.
php artisan make:livewire load-more-user
This creates files at both of these paths:
app/Http/Livewire/LoadMoreUser.php
resources/views/livewire/load-more-user.blade.php
Now we will update both files as below for our load-more example.
app/Http/Livewire/LoadMoreUser.php
<?php

namespace App\Http\Livewire;

use Livewire\Component;
use App\Models\User;

class LoadMoreUser extends Component
{
    public $perPage = 15;

    protected $listeners = [
        'load-more' => 'loadMore'
    ];

    /**
     * Increase the number of users shown whenever the page emits a load-more event.
     *
     * @return void
     */
    public function loadMore()
    {
        $this->perPage = $this->perPage + 5;
    }

    /**
     * Render the paginated list of users.
     *
     * @return \Illuminate\View\View
     */
    public function render()
    {
        $users = User::latest()->paginate($this->perPage);
        $this->emit('userStore');
        return view('livewire.load-more-user', ['users' => $users]);
    }
}
resources/views/livewire/load-more-user.blade.php
<div>
    <div class="table-responsive">
        <table class="table table-bordered">
            <thead>
                <tr>
                    <th>No.</th>
                    <th>Name</th>
                    <th>Email</th>
                </tr>
            </thead>
            <tbody>
                @foreach($users as $user)
                    <tr>
                        <td>{{ $user->id }}</td>
                        <td>{{ $user->name }}</td>
                        <td>{{ $user->email }}</td>
                    </tr>
                @endforeach
            </tbody>
        </table>
    </div>
</div>
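The component above listens for a load-more event, so the page needs to emit that event when the user scrolls near the bottom. A minimal sketch (assuming Livewire v2, where the global Livewire JavaScript object is available, and that this script is added to the page layout) could look like this:
<script type="text/javascript">
    window.addEventListener('scroll', function () {
        // When the viewport is within ~100px of the bottom of the page, ask the component for more rows.
        if (window.innerHeight + window.scrollY >= document.documentElement.scrollHeight - 100) {
            Livewire.emit('load-more');
        }
    });
</script>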
#laravel #php #web-development #programming #developer
Install via pip:
$ pip install pytumblr
Install from source:
$ git clone https://github.com/tumblr/pytumblr.git
$ cd pytumblr
$ python setup.py install
A pytumblr.TumblrRestClient
is the object you'll make all of your calls to the Tumblr API through. Creating one is this easy:
client = pytumblr.TumblrRestClient(
    '<consumer_key>',
    '<consumer_secret>',
    '<oauth_token>',
    '<oauth_secret>',
)
client.info() # Grabs the current user information
Two easy ways to get your credentials are the interactive_console.py tool (if you already have a consumer key & secret) and the Tumblr API console at https://api.tumblr.com/console.
client.info() # get information about the authenticating user
client.dashboard() # get the dashboard for the authenticating user
client.likes() # get the likes for the authenticating user
client.following() # get the blogs followed by the authenticating user
client.follow('codingjester.tumblr.com') # follow a blog
client.unfollow('codingjester.tumblr.com') # unfollow a blog
client.like(id, reblogkey) # like a post
client.unlike(id, reblogkey) # unlike a post
client.blog_info(blogName) # get information about a blog
client.posts(blogName, **params) # get posts for a blog
client.avatar(blogName) # get the avatar for a blog
client.blog_likes(blogName) # get the likes on a blog
client.followers(blogName) # get the followers of a blog
client.blog_following(blogName) # get the publicly exposed blogs that [blogName] follows
client.queue(blogName) # get the queue for a given blog
client.submission(blogName) # get the submissions for a given blog
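The **params passed to client.posts are forwarded as Tumblr API query options. A rough example (the blog name is the same sample blog used above; type and limit are standard Tumblr API parameters, not something specific to this README):
# Fetch the five most recent photo posts from a blog and print their ids and URLs
response = client.posts('codingjester.tumblr.com', type='photo', limit=5)
for post in response['posts']:
    print(post['id'], post['post_url'])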
Creating posts
PyTumblr lets you create all of the post types that Tumblr supports. When using these types there are a few default options that can be used with any post type (such as state, tags, tweet, slug and format, all of which appear in the examples below), and we'll show them throughout while showcasing the specific post types.
Creating a photo post
Creating a photo post supports a bunch of different options plus the described default options:
- caption - a string, the user supplied caption
- link - a string, the "click-through" url for the photo
- source - a string, the url for the photo you want to use (use this or the data parameter)
- data - a list or string, a list of filepaths or a single file path for multipart file upload
#Creates a photo post using a source URL
client.create_photo(blogName, state="published", tags=["testing", "ok"],
source="https://68.media.tumblr.com/b965fbb2e501610a29d80ffb6fb3e1ad/tumblr_n55vdeTse11rn1906o1_500.jpg")
#Creates a photo post using a local filepath
client.create_photo(blogName, state="queue", tags=["testing", "ok"],
tweet="Woah this is an incredible sweet post [URL]",
data="/Users/johnb/path/to/my/image.jpg")
#Creates a photoset post using several local filepaths
client.create_photo(blogName, state="draft", tags=["jb is cool"], format="markdown",
data=["/Users/johnb/path/to/my/image.jpg", "/Users/johnb/Pictures/kittens.jpg"],
caption="## Mega sweet kittens")
Creating a text post
Creating a text post supports the default options plus two other parameters:
- title - a string, the optional title for the post. Supports markdown or html
- body - a string, the body of the post. Supports markdown or html
#Creating a text post
client.create_text(blogName, state="published", slug="testing-text-posts", title="Testing", body="testing1 2 3 4")
Creating a quote post
Creating a quote post supports the default options plus two other parameters:
- quote - a string, the full text of the quote. Supports markdown or html
- source - a string, the cited source. HTML supported
#Creating a quote post
client.create_quote(blogName, state="queue", quote="I am the Walrus", source="Ringo")
Creating a link post
#Create a link post
client.create_link(blogName, title="I like to search things, you should too.", url="https://duckduckgo.com",
description="Search is pretty cool when a duck does it.")
Creating a chat post
Creating a chat post supports the default options plus two other parameters:
- title - a string, the title of the chat post
- conversation - a string, the text of the conversation/chat, with dialogue labels (no html)
#Create a chat post
chat = """John: Testing can be fun!
Renee: Testing is tedious and so are you.
John: Aw.
"""
client.create_chat(blogName, title="Renee just doesn't understand.", conversation=chat, tags=["renee", "testing"])
Creating an audio post
Creating an audio post allows for all default options and has three other parameters. The only thing to keep in mind while dealing with audio posts is to make sure that you use either the external_url parameter or data - you cannot use both at the same time.
- caption - a string, the caption for your post
- external_url - a string, the url of the site that hosts the audio file
- data - a string, the filepath of the audio file you want to upload to Tumblr
#Creating an audio file
client.create_audio(blogName, caption="Rock out.", data="/Users/johnb/Music/my/new/sweet/album.mp3")
#lets use soundcloud!
client.create_audio(blogName, caption="Mega rock out.", external_url="https://soundcloud.com/skrillex/sets/recess")
Creating a video post
Creating a video post allows for all default options and has three other options. Like the other post types, it has some restrictions: you cannot use the embed and data parameters at the same time.
- caption - a string, the caption for your post
- embed - a string, the HTML embed code for the video
- data - a string, the path of the file you want to upload
#Creating an upload from YouTube
client.create_video(blogName, caption="Jon Snow. Mega ridiculous sword.",
embed="http://www.youtube.com/watch?v=40pUYLacrj4")
#Creating a video post from local file
client.create_video(blogName, caption="testing", data="/Users/johnb/testing/ok/blah.mov")
Editing a post
Updating a post requires knowing what type of post you're updating. When updating, you can supply any of the options given above for that post type.
client.edit_post(blogName, id=post_id, type="text", title="Updated")
client.edit_post(blogName, id=post_id, type="photo", data="/Users/johnb/mega/awesome.jpg")
Reblogging a Post
Reblogging a post just requires knowing the post id and the reblog key, which is supplied in the JSON of any post object.
client.reblog(blogName, id=125356, reblog_key="reblog_key")
Deleting a post
Deleting just requires that you own the post and have the post id
client.delete_post(blogName, 123456) # Deletes your post :(
A note on tags: when passing tags as params, please pass them as a list (not a comma-separated string):
client.create_text(blogName, tags=['hello', 'world'], ...)
Getting notes for a post
In order to get the notes for a post, you need to have the post id and the blog that it is on.
data = client.notes(blogName, id='123456')
The results include a timestamp you can use to make future calls.
data = client.notes(blogName, id='123456', before_timestamp=data["_links"]["next"]["query_params"]["before_timestamp"])
# get posts with a given tag
client.tagged(tag, **params)
This client comes with a nice interactive console to run you through the OAuth process, grab your tokens (and store them for future use).
You'll need pyyaml
installed to run it, but then it's just:
$ python interactive-console.py
and away you go! Tokens are stored in ~/.tumblr
and are also shared by other Tumblr API clients like the Ruby client.
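The ~/.tumblr file is plain YAML, so once it exists you can also build a client from it directly. A small sketch (the key names below are assumed to match what the interactive console writes; check your own ~/.tumblr file if they differ):
import os
import yaml
import pytumblr

# Load the credentials saved by the interactive console (assumed key names)
with open(os.path.expanduser('~/.tumblr')) as f:
    creds = yaml.safe_load(f)

client = pytumblr.TumblrRestClient(
    creds['consumer_key'],
    creds['consumer_secret'],
    creds['oauth_token'],
    creds['oauth_token_secret'],
)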
The tests (and coverage reports) are run with nose, like this:
python setup.py test
Author: tumblr
Source Code: https://github.com/tumblr/pytumblr
License: Apache-2.0 license
Run C# scripts from the .NET CLI, define NuGet packages inline and edit/debug them in VS Code - all of that with full language services support from OmniSharp.
Name | Framework(s)
---|---
dotnet-script (global tool) | net6.0, net5.0, netcoreapp3.1
Dotnet.Script (CLI as Nuget) | net6.0, net5.0, netcoreapp3.1
Dotnet.Script.Core | netcoreapp3.1, netstandard2.0
Dotnet.Script.DependencyModel | netstandard2.0
Dotnet.Script.DependencyModel.Nuget | netstandard2.0
The only thing we need to install is the .NET Core 3.1 SDK or the .NET 5.0 SDK.
.NET Core 2.1 introduced the concept of global tools meaning that you can install dotnet-script
using nothing but the .NET CLI.
dotnet tool install -g dotnet-script
Tool 'dotnet-script' (version '0.22.0') was successfully installed.
You can invoke the tool using the following command: dotnet-script
The advantage of this approach is that you can use the same command for installation across all platforms. .NET Core SDK also supports viewing a list of installed tools and their uninstallation.
dotnet tool list -g
Package Id Version Commands
---------------------------------------------
dotnet-script 0.22.0 dotnet-script
dotnet tool uninstall dotnet-script -g
Tool 'dotnet-script' (version '0.22.0') was successfully uninstalled.
On Windows, the tool can also be installed with Chocolatey:
choco install dotnet.script
We also provide a PowerShell script for installation.
(new-object Net.WebClient).DownloadString("https://raw.githubusercontent.com/filipw/dotnet-script/master/install/install.ps1") | iex
And a bash script for installation on Linux and macOS:
curl -s https://raw.githubusercontent.com/filipw/dotnet-script/master/install/install.sh | bash
If permission is denied we can try with sudo
curl -s https://raw.githubusercontent.com/filipw/dotnet-script/master/install/install.sh | sudo bash
A Dockerfile for running dotnet-script in a Linux container is available. Build:
cd build
docker build -t dotnet-script -f Dockerfile ..
And run:
docker run -it dotnet-script --version
You can manually download all the releases in zip
format from the GitHub releases page.
Our typical helloworld.csx
might look like this:
Console.WriteLine("Hello world!");
That is all it takes and we can execute the script. Args are accessible via the global Args array.
dotnet script helloworld.csx
Simply create a folder somewhere on your system and issue the following command.
dotnet script init
This will create main.csx
along with the launch configuration needed to debug the script in VS Code.
.
├── .vscode
│ └── launch.json
├── main.csx
└── omnisharp.json
We can also initialize a folder using a custom filename.
dotnet script init custom.csx
Instead of main.csx, which is the default, we now have a file named custom.csx.
.
├── .vscode
│ └── launch.json
├── custom.csx
└── omnisharp.json
Note: Executing dotnet script init inside a folder that already contains one or more script files will not create the main.csx file.
Scripts can be executed directly from the shell as if they were executables.
foo.csx arg1 arg2 arg3
OSX/Linux
Just like all scripts, on OSX/Linux you need to have a #! and mark the file as executable via chmod +x foo.csx. If you use dotnet script init to create your csx it will automatically have the #! directive and be marked as executable.
The OSX/Linux shebang directive should be #!/usr/bin/env dotnet-script
#!/usr/bin/env dotnet-script
Console.WriteLine("Hello world");
You can also execute your script using dotnet script or dotnet-script, which allows you to pass arguments to control your script execution.
foo.csx arg1 arg2 arg3
dotnet script foo.csx -- arg1 arg2 arg3
dotnet-script foo.csx -- arg1 arg2 arg3
All arguments after --
are passed to the script in the following way:
dotnet script foo.csx -- arg1 arg2 arg3
Then you can access the arguments in the script context using the global Args
collection:
foreach (var arg in Args)
{
    Console.WriteLine(arg);
}
All arguments before -- are processed by dotnet script. For example, the following command-line
dotnet script -d foo.csx -- -d
will pass the -d before -- to dotnet script and enable the debug mode, whereas the -d after -- is passed to the script for its own interpretation of the argument.
dotnet script
has built-in support for referencing NuGet packages directly from within the script.
#r "nuget: AutoMapper, 6.1.0"
Note: Omnisharp needs to be restarted after adding a new package reference
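A small script using the package reference above might look like this (the Foo and Bar classes are made up for the example; only the AutoMapper calls come from the referenced package):
#r "nuget: AutoMapper, 6.1.0"

using AutoMapper;

public class Foo { public string Name { get; set; } }
public class Bar { public string Name { get; set; } }

// Configure a simple Foo -> Bar mapping and use it
var config = new MapperConfiguration(cfg => cfg.CreateMap<Foo, Bar>());
var mapper = config.CreateMapper();
var bar = mapper.Map<Bar>(new Foo { Name = "dotnet-script" });
Console.WriteLine(bar.Name);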
We can define package sources using a NuGet.Config
file in the script root folder. In addition to being used during execution of the script, it will also be used by OmniSharp
that provides language services for packages resolved from these package sources.
As an alternative to maintaining a local NuGet.Config
file we can define these package sources globally either at the user level or at the computer level as described in Configuring NuGet Behaviour
It is also possible to specify packages sources when executing the script.
dotnet script foo.csx -s https://SomePackageSource
Multiple packages sources can be specified like this:
dotnet script foo.csx -s https://SomePackageSource -s https://AnotherPackageSource
Dotnet-Script can create a standalone executable or DLL for your script.
Switch | Long switch | description |
---|---|---|
-o | --output | Directory where the published executable should be placed. Defaults to a 'publish' folder in the current directory. |
-n | --name | The name for the generated DLL (executable not supported at this time). Defaults to the name of the script. |
| --dll | Publish to a .dll instead of an executable. |
-c | --configuration | Configuration to use for publishing the script [Release/Debug]. Default is "Debug" |
-d | --debug | Enables debug output. |
-r | --runtime | The runtime used when publishing the self contained executable. Defaults to your current runtime. |
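For example, publishing a script to a DLL might look like this (foo.csx and the publish folder are placeholder names, using the switches from the table above):
dotnet script publish foo.csx --dll -o ./publish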
The executable can be run directly, independent of the dotnet install, while the DLL can be run using the dotnet CLI like this:
dotnet script exec {path_to_dll} -- arg1 arg2
We provide two types of caching, the dependency cache
and the execution cache
which is explained in detail below. In order for any of these caches to be enabled, it is required that all NuGet package references are specified using an exact version number. The reason for this constraint is that we need to make sure that we don't execute a script with a stale dependency graph.
In order to resolve the dependencies for a script, a dotnet restore
is executed under the hood to produce a project.assets.json
file from which we can figure out all the dependencies we need to add to the compilation. This is an out-of-process operation and represents a significant overhead to the script execution. So this cache works by looking at all the dependencies specified in the script(s), either in the form of NuGet package references or assembly file references. If these dependencies match the dependencies from the last script execution, we skip the restore and read the dependencies from the already generated project.assets.json
file. If any of the dependencies have changed, we must restore again to obtain the new dependency graph.
In order to execute a script it needs to be compiled first and since that is a CPU and time consuming operation, we make sure that we only compile when the source code has changed. This works by creating a SHA256 hash from all the script files involved in the execution. This hash is written to a temporary location along with the DLL that represents the result of the script compilation. When a script is executed the hash is computed and compared with the hash from the previous compilation. If they match there is no need to recompile and we run from the already compiled DLL. If the hashes don't match, the cache is invalidated and we recompile.
You can override this automatic caching by passing --no-cache flag, which will bypass both caches and cause dependency resolution and script compilation to happen every time we execute the script.
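For example (foo.csx is a placeholder script name):
dotnet script --no-cache foo.csx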
The temporary location used for caches is a sub-directory named dotnet-script
under (in order of priority):
- DOTNET_SCRIPT_CACHE_LOCATION, if defined and its value is not empty.
- On Linux, $XDG_CACHE_HOME if defined, otherwise $HOME/.cache
- On macOS, ~/Library/Caches
- Otherwise, Path.GetTempPath for the platform.
The days of debugging scripts using Console.WriteLine
are over. One major feature of dotnet script
is the ability to debug scripts directly in VS Code. Just set a breakpoint anywhere in your script file(s) and hit F5 (start debugging).
Script packages are a way of organizing reusable scripts into NuGet packages that can be consumed by other scripts. This means that we now can leverage scripting infrastructure without the need for any kind of bootstrapping.
A script package is just a regular NuGet package that contains script files inside the content
or contentFiles
folder.
The following example shows how the scripts are laid out inside the NuGet package according to the standard convention .
└── contentFiles
└── csx
└── netstandard2.0
└── main.csx
This example contains just the main.csx
file in the root folder, but packages may have multiple script files either in the root folder or in subfolders below the root folder.
When loading a script package we will look for an entry point script to be loaded. This entry point script is identified as main.csx in the root folder. If the entry point script cannot be determined, we will simply load all the script files in the package.
The advantage with using an entry point script is that we can control loading other scripts from the package.
To consume a script package, all we need to do is specify the NuGet package in the #load directive.
The following example loads the simple-targets package that contains script files to be included in our script.
#load "nuget:simple-targets-csx, 6.0.0"
using static SimpleTargets;
var targets = new TargetDictionary();
targets.Add("default", () => Console.WriteLine("Hello, world!"));
Run(Args, targets);
Note: Debugging also works for script packages so that we can easily step into the scripts that are brought in using the
#load
directive.
Scripts don't actually have to exist locally on the machine. We can also execute scripts that are made available on an http(s)
endpoint.
This means that we can create a Gist on Github and execute it just by providing the URL to the Gist.
This Gist contains a script that prints out "Hello World"
We can execute the script like this
dotnet script https://gist.githubusercontent.com/seesharper/5d6859509ea8364a1fdf66bbf5b7923d/raw/0a32bac2c3ea807f9379a38e251d93e39c8131cb/HelloWorld.csx
That is a pretty long URL, so why not make it a TinyURL, like this:
dotnet script https://tinyurl.com/y8cda9zt
A pretty common scenario is that we have logic that is relative to the script path. We don't want to require the user to be in a certain directory for these paths to resolve correctly so here is how to provide the script path and the script folder regardless of the current working directory.
public static string GetScriptPath([CallerFilePath] string path = null) => path;
public static string GetScriptFolder([CallerFilePath] string path = null) => Path.GetDirectoryName(path);
Tip: Put these methods as top level methods in a separate script file and
#load
that file wherever access to the script path and/or folder is needed.
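A sketch of how that might look from the consuming script (scriptinfo.csx is a hypothetical file that holds the two helper methods above):
#load "scriptinfo.csx"

Console.WriteLine($"Script path:   {GetScriptPath()}");
Console.WriteLine($"Script folder: {GetScriptFolder()}");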
This release contains a C# REPL (Read-Evaluate-Print-Loop). The REPL mode ("interactive mode") is started by executing dotnet-script
without any arguments.
The interactive mode allows you to supply individual C# code blocks and have them executed as soon as you press Enter. The REPL is configured with the same default set of assembly references and using statements as regular CSX script execution.
Once dotnet-script
starts you will see a prompt for input. You can start typing C# code there.
~$ dotnet script
> var x = 1;
> x+x
2
If you submit an unterminated expression into the REPL (no ;
at the end), it will be evaluated and the result will be serialized using a formatter and printed in the output. This is a bit more interesting than just calling ToString()
on the object, because it attempts to capture the actual structure of the object. For example:
~$ dotnet script
> var x = new List<string>();
> x.Add("foo");
> x
List<string>(1) { "foo" }
> x.Add("bar");
> x
List<string>(2) { "foo", "bar" }
>
REPL also supports inline Nuget packages - meaning the Nuget packages can be installed into the REPL from within the REPL. This is done via our #r
and #load
from Nuget support and uses identical syntax.
~$ dotnet script
> #r "nuget: Automapper, 6.1.1"
> using AutoMapper;
> typeof(MapperConfiguration)
[AutoMapper.MapperConfiguration]
> #load "nuget: simple-targets-csx, 6.0.0";
> using static SimpleTargets;
> typeof(TargetDictionary)
[Submission#0+SimpleTargets+TargetDictionary]
Using Roslyn syntax parsing, we also support multiline REPL mode. This means that if you have an uncompleted code block and press Enter, we will automatically enter the multiline mode. The mode is indicated by the *
character. This is particularly useful for declaring classes and other more complex constructs.
~$ dotnet script
> class Foo {
* public string Bar {get; set;}
* }
> var foo = new Foo();
Aside from the regular C# script code, you can invoke the following commands (directives) from within the REPL:
Command | Description |
---|---|
#load | Load a script into the REPL (same as #load usage in CSX) |
#r | Load an assembly into the REPL (same as #r usage in CSX) |
#reset | Reset the REPL back to initial state (without restarting it) |
#cls | Clear the console screen without resetting the REPL state |
#exit | Exits the REPL |
You can execute a CSX script and, at the end of it, drop yourself into the context of the REPL. This way, the REPL becomes "seeded" with your code - all the classes, methods or variables are available in the REPL context. This is achieved by running a script with an -i
flag.
For example, given the following CSX script:
var msg = "Hello World";
Console.WriteLine(msg);
When you run this with the -i
flag, Hello World
is printed, REPL starts and msg
variable is available in the REPL context.
~$ dotnet script foo.csx -i
Hello World
>
You can also seed the REPL from inside the REPL - at any point - by invoking a #load
directive pointed at a specific file. For example:
~$ dotnet script
> #load "foo.csx"
Hello World
>
The following example shows how we can pipe data in and out of a script.
The UpperCase.csx
script simply converts the standard input to upper case and writes it back out to standard output.
using (var streamReader = new StreamReader(Console.OpenStandardInput()))
{
    Write(streamReader.ReadToEnd().ToUpper());
}
We can now simply pipe the output from one command into our script like this.
echo "This is some text" | dotnet script UpperCase.csx
THIS IS SOME TEXT
The first thing we need to do is add the following to the launch.json file, which allows VS Code to debug a running process.
{
    "name": ".NET Core Attach",
    "type": "coreclr",
    "request": "attach",
    "processId": "${command:pickProcess}"
}
To debug this script we need a way to attach the debugger in VS Code and the simplest thing we can do here is to wait for the debugger to attach by adding this method somewhere.
public static void WaitForDebugger()
{
    Console.WriteLine("Attach Debugger (VS Code)");
    while(!Debugger.IsAttached)
    {
    }
}
To debug the script when executing it from the command line we can do something like
WaitForDebugger();
using (var streamReader = new StreamReader(Console.OpenStandardInput()))
{
    Write(streamReader.ReadToEnd().ToUpper()); // <- SET BREAKPOINT HERE
}
Now when we run the script from the command line we will get
$ echo "This is some text" | dotnet script UpperCase.csx
Attach Debugger (VS Code)
This now gives us a chance to attach the debugger before stepping into the script and from VS Code, select the .NET Core Attach
debugger and pick the process that represents the executing script.
Once that is done we should see our breakpoint being hit.
By default, scripts will be compiled using the debug
configuration. This is to ensure that we can debug a script in VS Code as well as attaching a debugger for long running scripts.
There are however situations where we might need to execute a script that is compiled with the release
configuration. For instance, running benchmarks using BenchmarkDotNet is not possible unless the script is compiled with the release
configuration.
We can specify this when executing the script.
dotnet script foo.csx -c release
Starting from version 0.50.0, dotnet-script supports .NET Core 3.0 and all the C# 8 features. The way we deal with nullable reference types in dotnet-script is that we turn every warning related to nullable reference types into compiler errors. This means every warning between CS8600 and CS8655 is treated as an error when compiling the script.
Nullable reference types are turned off by default and the way we enable them is by using the #nullable enable compiler directive. This means that existing scripts will continue to work, but we can now opt in to this new feature.
#!/usr/bin/env dotnet-script
#nullable enable
string name = null;
Trying to execute the script will result in the following error
main.csx(5,15): error CS8625: Cannot convert null literal to non-nullable reference type.
We will also see this when working with scripts in VS Code under the problems panel.
Download Details:
Author: filipw
Source Code: https://github.com/filipw/dotnet-script
License: MIT License
A Perl script that converts PDF files to Gerber format.
Pdf2Gerb generates Gerber 274X photoplotting and Excellon drill files from PDFs of a PCB. Up to three PDFs are used: the top copper layer, the bottom copper layer (for 2-sided PCBs), and an optional silk screen layer. The PDFs can be created directly from any PDF drawing software, or a PDF print driver can be used to capture the Print output if the drawing software does not directly support output to PDF.
The general workflow is as follows: design the PCB in your CAD or drawing software, export or print each layer to a PDF, run Pdf2Gerb on those PDFs to generate the Gerber and Excellon files, and then review the output before sending it to the PCB manufacturer.
Please note that Pdf2Gerb does NOT perform DRC (Design Rule Checks), as these will vary according to individual PCB manufacturer conventions and capabilities. Also note that Pdf2Gerb is not perfect, so the output files must always be checked before submitting them. As of version 1.6, Pdf2Gerb supports most PCB elements, such as round and square pads, round holes, traces, SMD pads, ground planes, no-fill areas, and panelization. However, because it interprets the graphical output of a Print function, there are limitations in what it can recognize (or there may be bugs).
See docs/Pdf2Gerb.pdf for install/setup, config, usage, and other info.
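The exact command-line arguments are covered in docs/Pdf2Gerb.pdf; based on the layer description above, an invocation might look roughly like this (the file names, and even the argument order, are only an illustration - check the docs for the real usage):
perl pdf2gerb.pl top_copper.pdf bottom_copper.pdf silkscreen.pdf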
#Pdf2Gerb config settings:
#Put this file in same folder/directory as pdf2gerb.pl itself (global settings),
#or copy to another folder/directory with PDFs if you want PCB-specific settings.
#There is only one user of this file, so we don't need a custom package or namespace.
#NOTE: all constants defined in here will be added to main namespace.
#package pdf2gerb_cfg;
use strict; #trap undef vars (easier debug)
use warnings; #other useful info (easier debug)
##############################################################################################
#configurable settings:
#change values here instead of in the main pdf2gerb.pl file
use constant WANT_COLORS => ($^O !~ m/Win/); #ANSI colors no worky on Windows? this must be set < first DebugPrint() call
#just a little warning; set realistic expectations:
#DebugPrint("${\(CYAN)}Pdf2Gerb.pl ${\(VERSION)}, $^O O/S\n${\(YELLOW)}${\(BOLD)}${\(ITALIC)}This is EXPERIMENTAL software. \nGerber files MAY CONTAIN ERRORS. Please CHECK them before fabrication!${\(RESET)}", 0); #if WANT_DEBUG
use constant METRIC => FALSE; #set to TRUE for metric units (only affect final numbers in output files, not internal arithmetic)
use constant APERTURE_LIMIT => 0; #34; #max #apertures to use; generate warnings if too many apertures are used (0 to not check)
use constant DRILL_FMT => '2.4'; #'2.3'; #'2.4' is the default for PCB fab; change to '2.3' for CNC
use constant WANT_DEBUG => 0; #10; #level of debug wanted; higher == more, lower == less, 0 == none
use constant GERBER_DEBUG => 0; #level of debug to include in Gerber file; DON'T USE FOR FABRICATION
use constant WANT_STREAMS => FALSE; #TRUE; #save decompressed streams to files (for debug)
use constant WANT_ALLINPUT => FALSE; #TRUE; #save entire input stream (for debug ONLY)
#DebugPrint(sprintf("${\(CYAN)}DEBUG: stdout %d, gerber %d, want streams? %d, all input? %d, O/S: $^O, Perl: $]${\(RESET)}\n", WANT_DEBUG, GERBER_DEBUG, WANT_STREAMS, WANT_ALLINPUT), 1);
#DebugPrint(sprintf("max int = %d, min int = %d\n", MAXINT, MININT), 1);
#define standard trace and pad sizes to reduce scaling or PDF rendering errors:
#This avoids weird aperture settings and replaces them with more standardized values.
#(I'm not sure how photoplotters handle strange sizes).
#Fewer choices here gives more accurate mapping in the final Gerber files.
#units are in inches
use constant TOOL_SIZES => #add more as desired
(
#round or square pads (> 0) and drills (< 0):
.010, -.001, #tiny pads for SMD; dummy drill size (too small for practical use, but needed so StandardTool will use this entry)
.031, -.014, #used for vias
.041, -.020, #smallest non-filled plated hole
.051, -.025,
.056, -.029, #useful for IC pins
.070, -.033,
.075, -.040, #heavier leads
# .090, -.043, #NOTE: 600 dpi is not high enough resolution to reliably distinguish between .043" and .046", so choose 1 of the 2 here
.100, -.046,
.115, -.052,
.130, -.061,
.140, -.067,
.150, -.079,
.175, -.088,
.190, -.093,
.200, -.100,
.220, -.110,
.160, -.125, #useful for mounting holes
#some additional pad sizes without holes (repeat a previous hole size if you just want the pad size):
.090, -.040, #want a .090 pad option, but use dummy hole size
.065, -.040, #.065 x .065 rect pad
.035, -.040, #.035 x .065 rect pad
#traces:
.001, #too thin for real traces; use only for board outlines
.006, #minimum real trace width; mainly used for text
.008, #mainly used for mid-sized text, not traces
.010, #minimum recommended trace width for low-current signals
.012,
.015, #moderate low-voltage current
.020, #heavier trace for power, ground (even if a lighter one is adequate)
.025,
.030, #heavy-current traces; be careful with these ones!
.040,
.050,
.060,
.080,
.100,
.120,
);
#Areas larger than the values below will be filled with parallel lines:
#This cuts down on the number of aperture sizes used.
#Set to 0 to always use an aperture or drill, regardless of size.
use constant { MAX_APERTURE => max((TOOL_SIZES)) + .004, MAX_DRILL => -min((TOOL_SIZES)) + .004 }; #max aperture and drill sizes (plus a little tolerance)
#DebugPrint(sprintf("using %d standard tool sizes: %s, max aper %.3f, max drill %.3f\n", scalar((TOOL_SIZES)), join(", ", (TOOL_SIZES)), MAX_APERTURE, MAX_DRILL), 1);
#NOTE: Compare the PDF to the original CAD file to check the accuracy of the PDF rendering and parsing!
#for example, the CAD software I used generated the following circles for holes:
#CAD hole size: parsed PDF diameter: error:
# .014 .016 +.002
# .020 .02267 +.00267
# .025 .026 +.001
# .029 .03167 +.00267
# .033 .036 +.003
# .040 .04267 +.00267
#This was usually ~ .002" - .003" too big compared to the hole as displayed in the CAD software.
#To compensate for PDF rendering errors (either during CAD Print function or PDF parsing logic), adjust the values below as needed.
#units are pixels; for example, a value of 2.4 at 600 dpi = .004 inch, 2 at 600 dpi = .0033"
use constant
{
HOLE_ADJUST => -0.004 * 600, #-2.6, #holes seemed to be slightly oversized (by .002" - .004"), so shrink them a little
RNDPAD_ADJUST => -0.003 * 600, #-2, #-2.4, #round pads seemed to be slightly oversized, so shrink them a little
SQRPAD_ADJUST => +0.001 * 600, #+.5, #square pads are sometimes too small by .00067, so bump them up a little
RECTPAD_ADJUST => 0, #(pixels) rectangular pads seem to be okay? (not tested much)
TRACE_ADJUST => 0, #(pixels) traces seemed to be okay?
REDUCE_TOLERANCE => .001, #(inches) allow this much variation when reducing circles and rects
};
#Also, my CAD's Print function or the PDF print driver I used was a little off for circles, so define some additional adjustment values here:
#Values are added to X/Y coordinates; units are pixels; for example, a value of 1 at 600 dpi would be ~= .002 inch
use constant
{
CIRCLE_ADJUST_MINX => 0,
CIRCLE_ADJUST_MINY => -0.001 * 600, #-1, #circles were a little too high, so nudge them a little lower
CIRCLE_ADJUST_MAXX => +0.001 * 600, #+1, #circles were a little too far to the left, so nudge them a little to the right
CIRCLE_ADJUST_MAXY => 0,
SUBST_CIRCLE_CLIPRECT => FALSE, #generate circle and substitute for clip rects (to compensate for the way some CAD software draws circles)
WANT_CLIPRECT => TRUE, #FALSE, #AI doesn't need clip rect at all? should be on normally?
RECT_COMPLETION => FALSE, #TRUE, #fill in 4th side of rect when 3 sides found
};
#allow .012 clearance around pads for solder mask:
#This value effectively adjusts pad sizes in the TOOL_SIZES list above (only for solder mask layers).
use constant SOLDER_MARGIN => +.012; #units are inches
#line join/cap styles:
use constant
{
CAP_NONE => 0, #butt (none); line is exact length
CAP_ROUND => 1, #round cap/join; line overhangs by a semi-circle at either end
CAP_SQUARE => 2, #square cap/join; line overhangs by a half square on either end
CAP_OVERRIDE => FALSE, #cap style overrides drawing logic
};
#number of elements in each shape type:
use constant
{
RECT_SHAPELEN => 6, #x0, y0, x1, y1, count, "rect" (start, end corners)
LINE_SHAPELEN => 6, #x0, y0, x1, y1, count, "line" (line seg)
CURVE_SHAPELEN => 10, #xstart, ystart, x0, y0, x1, y1, xend, yend, count, "curve" (bezier 2 points)
CIRCLE_SHAPELEN => 5, #x, y, 5, count, "circle" (center + radius)
};
#const my %SHAPELEN =
#Readonly my %SHAPELEN =>
our %SHAPELEN =
(
rect => RECT_SHAPELEN,
line => LINE_SHAPELEN,
curve => CURVE_SHAPELEN,
circle => CIRCLE_SHAPELEN,
);
#panelization:
#This will repeat the entire body the number of times indicated along the X or Y axes (files grow accordingly).
#Display elements that overhang PCB boundary can be squashed or left as-is (typically text or other silk screen markings).
#Set "overhangs" TRUE to allow overhangs, FALSE to truncate them.
#xpad and ypad allow margins to be added around outer edge of panelized PCB.
use constant PANELIZE => {'x' => 1, 'y' => 1, 'xpad' => 0, 'ypad' => 0, 'overhangs' => TRUE}; #number of times to repeat in X and Y directions
# Set this to 1 if you need TurboCAD support.
#$turboCAD = FALSE; #is this still needed as an option?
#CIRCAD pad generation uses an appropriate aperture, then moves it (stroke) "a little" - we use this to find pads and distinguish them from PCB holes.
use constant PAD_STROKE => 0.3; #0.0005 * 600; #units are pixels
#convert very short traces to pads or holes:
use constant TRACE_MINLEN => .001; #units are inches
#use constant ALWAYS_XY => TRUE; #FALSE; #force XY even if X or Y doesn't change; NOTE: needs to be TRUE for all pads to show in FlatCAM and ViewPlot
use constant REMOVE_POLARITY => FALSE; #TRUE; #set to remove subtractive (negative) polarity; NOTE: must be FALSE for ground planes
#PDF uses "points", each point = 1/72 inch
#combined with a PDF scale factor of .12, this gives 600 dpi resolution (72 / .12 = 600 dpi)
use constant INCHES_PER_POINT => 1/72; #0.0138888889; #multiply point-size by this to get inches
# The precision used when computing a bezier curve. Higher numbers are more precise but slower (and generate larger files).
#$bezierPrecision = 100;
use constant BEZIER_PRECISION => 36; #100; #use const; reduced for faster rendering (mainly used for silk screen and thermal pads)
# Ground planes and silk screen or larger copper rectangles or circles are filled line-by-line using this resolution.
use constant FILL_WIDTH => .01; #fill at most 0.01 inch at a time
# The max number of characters to read into memory
use constant MAX_BYTES => 10 * M; #bumped up to 10 MB, use const
use constant DUP_DRILL1 => TRUE; #FALSE; #kludge: ViewPlot doesn't load drill files that are too small so duplicate first tool
my $runtime = time(); #Time::HiRes::gettimeofday(); #measure my execution time
print STDERR "Loaded config settings from '${\(__FILE__)}'.\n";
1; #last value must be truthful to indicate successful load
#############################################################################################
#junk/experiment:
#use Package::Constants;
#use Exporter qw(import); #https://perldoc.perl.org/Exporter.html
#my $caller = "pdf2gerb::";
#sub cfg
#{
# my $proto = shift;
# my $class = ref($proto) || $proto;
# my $settings =
# {
# $WANT_DEBUG => 990, #10; #level of debug wanted; higher == more, lower == less, 0 == none
# };
# bless($settings, $class);
# return $settings;
#}
#use constant HELLO => "hi there2"; #"main::HELLO" => "hi there";
#use constant GOODBYE => 14; #"main::GOODBYE" => 12;
#print STDERR "read cfg file\n";
#our @EXPORT_OK = Package::Constants->list(__PACKAGE__); #https://www.perlmonks.org/?node_id=1072691; NOTE: "_OK" skips short/common names
#print STDERR scalar(@EXPORT_OK) . " consts exported:\n";
#foreach(@EXPORT_OK) { print STDERR "$_\n"; }
#my $val = main::thing("xyz");
#print STDERR "caller gave me $val\n";
#foreach my $arg (@ARGV) { print STDERR "arg $arg\n"; }
Author: swannman
Source Code: https://github.com/swannman/pdf2gerb
License: GPL-3.0 license
If you accumulate data on which you base your decision-making as an organization, you most probably need to think about your data architecture and consider possible best practices. Gaining a competitive edge, remaining customer-centric to the greatest extent possible, and streamlining processes to get on-the-button outcomes can all be traced back to an organization’s capacity to build a future-ready data architecture.
In what follows, we offer a short overview of the overarching capabilities of data architecture. These include user-centricity, elasticity, robustness, and the capacity to ensure the seamless flow of data at all times. Added to these are automation enablement, plus security and data governance considerations. These points form our checklist for what we perceive to be an anticipatory analytics ecosystem.
#big data #data science #big data analytics #data analysis #data architecture #data transformation #data platform #data strategy #cloud data platform #data acquisition
First, we will need a table, and I am creating a products table for this example. So run the following query to create the table.
CREATE TABLE `products` (
`id` int(10) unsigned NOT NULL AUTO_INCREMENT,
`name` varchar(255) COLLATE utf8mb4_unicode_ci NOT NULL,
`description` varchar(255) COLLATE utf8mb4_unicode_ci DEFAULT NULL,
`created_at` timestamp NULL DEFAULT CURRENT_TIMESTAMP,
`updated_at` datetime DEFAULT NULL,
PRIMARY KEY (`id`)
) ENGINE=InnoDB AUTO_INCREMENT=7 DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_unicode_ci
Next, we will need to insert some dummy records in this table that will be deleted.
INSERT INTO `products` (`name`, `description`) VALUES
('Test product 1', 'Product description example1'),
('Test product 2', 'Product description example2'),
('Test product 3', 'Product description example3'),
('Test product 4', 'Product description example4'),
('Test product 5', 'Product description example5');
Now we are ready to create a model corresponding to this products table. Here we will create the Product model. So let's create a model file Product.php under the app directory and put the code below in it.
<?php

namespace App;

use Illuminate\Database\Eloquent\Model;

class Product extends Model
{
    protected $fillable = [
        'name', 'description'
    ];
}
Now, in this second step, we will create some routes to handle the requests for this example. So open the routes/web.php file and copy the routes given below.
routes/web.php
Route::get('product', 'ProductController@index');
Route::delete('product/{id}', ['as'=>'product.destroy','uses'=>'ProductController@destroy']);
Route::delete('delete-multiple-product', ['as'=>'product.multiple-delete','uses'=>'ProductController@deleteMultiple']);
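These routes point at a ProductController that is not shown in this excerpt. A minimal sketch of what its actions might look like (the method bodies and the returned view/JSON shapes are assumptions for illustration, not the original tutorial's code):
<?php

namespace App\Http\Controllers;

use App\Product;
use Illuminate\Http\Request;

class ProductController extends Controller
{
    // Show the product list page
    public function index()
    {
        $products = Product::all();
        return view('products', compact('products'));
    }

    // Delete a single product (called via AJAX with the DELETE verb)
    public function destroy($id)
    {
        Product::findOrFail($id)->delete();
        return response()->json(['success' => 'Product deleted successfully.']);
    }

    // Delete multiple products; expects an "ids" array from the checked checkboxes
    public function deleteMultiple(Request $request)
    {
        Product::whereIn('id', $request->input('ids', []))->delete();
        return response()->json(['success' => 'Products deleted successfully.']);
    }
}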
#laravel #delete multiple rows in laravel using ajax #laravel ajax delete #laravel ajax multiple checkbox delete #laravel delete multiple rows #laravel delete records using ajax #laravel multiple checkbox delete rows #laravel multiple delete