Laravel Trusted Proxies
Setting a trusted proxy allows for correct URL generation, redirecting, session handling and logging in Laravel when behind a reverse proxy such as a load balancer or cache.
Laravel 5.5+ comes with this package. If you are using Laravel 5.5 or greater, you do not need to add this to your project separately.
The package version to install depends on your Laravel version:

Laravel Version | Package Version |
---|---|
5.0 - 5.4 | fideloper/proxy:~3.3 |
5.5 | fideloper/proxy:^4.0 |
5.6 | fideloper/proxy:^4.0 |
5.7 | fideloper/proxy:^4.0 |
5.8 | fideloper/proxy:^4.0 |
6.x | fideloper/proxy:^4.2 |
7.x | fideloper/proxy:^4.3 |

To install Trusted Proxy on Laravel 5.0 - 5.4, use:

composer require fideloper/proxy:^3.3

For Laravel 4.x, use:

composer require fideloper/proxy:^2.0
Refer to the docs above for using Trusted Proxy in Laravel 5.5+. For Laravel 4.0 - 5.4, refer to the wiki.
Setting a trusted proxy allows for correct URL generation, redirecting, session handling and logging in Laravel when behind a reverse proxy.
This is useful if your web servers sit behind a load balancer (Nginx, HAProxy, Envoy, ELB/ALB, etc), HTTP cache (CloudFlare, Squid, Varnish, etc), or other intermediary (reverse) proxy.
Applications behind a reverse proxy typically read HTTP headers such as X-Forwarded, X-Forwarded-For and X-Forwarded-Proto (and more) to know about the real end client making an HTTP request.
If those headers were not set, the application code would think every incoming HTTP request was coming from the proxy.
Laravel (technically the Symfony HTTP base classes) has a concept of a "trusted proxy", where those X-Forwarded headers will only be used if the source IP address of the request is known. In other words, it only trusts those headers if the proxy is trusted.
This package creates an easier interface to that option. You can set the IP addresses of the proxies (as the application would see them, so they may be private network IP addresses), and the Symfony HTTP classes will know to use the X-Forwarded headers if an HTTP request containing those headers came from a trusted proxy.
A very common load balancing approach is to send https:// requests to a load balancer, but send http:// requests to the application servers behind the load balancer.
For example, you may send a request in your browser to https://example.org. The load balancer, in turn, might send requests to an application server at http://192.168.1.23.
What if that server returns a redirect, or generates an asset URL? The user's browser would get back a redirect or HTML that includes http://192.168.1.23 in it, which is clearly wrong.
What happens is that the application thinks its hostname is 192.168.1.23 and the scheme is http://. It doesn't know that the end client used https://example.org for its web request.
So the application needs to know to read the X-Forwarded headers to get the correct request details (scheme https://, host example.org).
Laravel/Symfony automatically reads those headers, but only if the trusted proxy configuration is set to "trust" the load balancer/reverse proxy.
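In Laravel 5.5+ that configuration lives in the App\Http\Middleware\TrustProxies middleware that ships with the framework. A minimal sketch of what it might look like (the proxy IP address below is just an example; swap in your own):

<?php

namespace App\Http\Middleware;

use Fideloper\Proxy\TrustProxies as Middleware;
use Illuminate\Http\Request;

class TrustProxies extends Middleware
{
    // The IP address(es) of the reverse proxy as seen by the application,
    // or '*' to trust every proxy (see the note below about hosted load balancers).
    protected $proxies = ['192.168.1.10'];

    // The X-Forwarded-* headers that should be trusted from that proxy.
    protected $headers = Request::HEADER_X_FORWARDED_ALL;
}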
Note: Many of us use hosted load balancers/proxies such as AWS ELB/ALB, etc. We don't know the IP addresses of those reverse proxies, so you need to trust all proxies in that case. The trade-off is the security risk of allowing people to potentially spoof the X-Forwarded headers.
This wiki page has a list of popular services and the IP addresses of their servers, where available. Any updates or suggestions are welcome!
Author: Fideloper
Source Code: https://github.com/fideloper/TrustedProxy
License: MIT license
Run C# scripts from the .NET CLI, define NuGet packages inline and edit/debug them in VS Code - all of that with full language services support from OmniSharp.
Name | Framework(s) |
---|---|
dotnet-script (global tool) | net6.0, net5.0, netcoreapp3.1 |
Dotnet.Script (CLI as Nuget) | net6.0, net5.0, netcoreapp3.1 |
Dotnet.Script.Core | netcoreapp3.1, netstandard2.0 |
Dotnet.Script.DependencyModel | netstandard2.0 |
Dotnet.Script.DependencyModel.Nuget | netstandard2.0 |
The only thing we need to install is .NET Core 3.1 or .NET 5.0 SDK.
.NET Core 2.1 introduced the concept of global tools meaning that you can install dotnet-script
using nothing but the .NET CLI.
dotnet tool install -g dotnet-script
You can invoke the tool using the following command: dotnet-script
Tool 'dotnet-script' (version '0.22.0') was successfully installed.
The advantage of this approach is that you can use the same command for installation across all platforms. .NET Core SDK also supports viewing a list of installed tools and their uninstallation.
dotnet tool list -g
Package Id Version Commands
---------------------------------------------
dotnet-script 0.22.0 dotnet-script
dotnet tool uninstall dotnet-script -g
Tool 'dotnet-script' (version '0.22.0') was successfully uninstalled.
On Windows, dotnet-script can also be installed with Chocolatey:
choco install dotnet.script
We also provide a PowerShell script for installation.
(new-object Net.WebClient).DownloadString("https://raw.githubusercontent.com/filipw/dotnet-script/master/install/install.ps1") | iex
curl -s https://raw.githubusercontent.com/filipw/dotnet-script/master/install/install.sh | bash
If permission is denied we can try with sudo
curl -s https://raw.githubusercontent.com/filipw/dotnet-script/master/install/install.sh | sudo bash
A Dockerfile for running dotnet-script in a Linux container is available. Build:
cd build
docker build -t dotnet-script -f Dockerfile ..
And run:
docker run -it dotnet-script --version
You can manually download all the releases in zip
format from the GitHub releases page.
Our typical helloworld.csx
might look like this:
Console.WriteLine("Hello world!");
That is all it takes and we can execute the script. Args are accessible via the global Args array.
dotnet script helloworld.csx
Simply create a folder somewhere on your system and issue the following command.
dotnet script init
This will create main.csx
along with the launch configuration needed to debug the script in VS Code.
.
├── .vscode
│ └── launch.json
├── main.csx
└── omnisharp.json
We can also initialize a folder using a custom filename.
dotnet script init custom.csx
Instead of main.csx, which is the default, we now have a file named custom.csx.
.
├── .vscode
│ └── launch.json
├── custom.csx
└── omnisharp.json
Note: Executing dotnet script init inside a folder that already contains one or more script files will not create the main.csx file.
Scripts can be executed directly from the shell as if they were executables.
foo.csx arg1 arg2 arg3
OSX/Linux
Just like all scripts, on OSX/Linux you need to have a #! and mark the file as executable via chmod +x foo.csx. If you use dotnet script init to create your csx it will automatically have the #! directive and be marked as executable.
The OSX/Linux shebang directive should be #!/usr/bin/env dotnet-script
#!/usr/bin/env dotnet-script
Console.WriteLine("Hello world");
You can also execute your script using dotnet script or dotnet-script, which allows you to pass additional arguments that control the script execution.
foo.csx arg1 arg2 arg3
dotnet script foo.csx -- arg1 arg2 arg3
dotnet-script foo.csx -- arg1 arg2 arg3
All arguments after --
are passed to the script in the following way:
dotnet script foo.csx -- arg1 arg2 arg3
Then you can access the arguments in the script context using the global Args
collection:
foreach (var arg in Args)
{
Console.WriteLine(arg);
}
All arguments before --
are processed by dotnet script
. For example, the following command-line
dotnet script -d foo.csx -- -d
will pass the -d before -- to dotnet script and enable the debug mode, whereas the -d after -- is passed to the script for its own interpretation of the argument.
dotnet script
has built-in support for referencing NuGet packages directly from within the script.
#r "nuget: AutoMapper, 6.1.0"
Note: Omnisharp needs to be restarted after adding a new package reference
We can define package sources using a NuGet.Config
file in the script root folder. In addition to being used during execution of the script, it will also be used by OmniSharp
that provides language services for packages resolved from these package sources.
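A minimal NuGet.Config placed next to the script might look something like this (the feed URL is just a placeholder, matching the -s examples below):

<?xml version="1.0" encoding="utf-8"?>
<configuration>
  <packageSources>
    <add key="SomePackageSource" value="https://SomePackageSource" />
  </packageSources>
</configuration>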
As an alternative to maintaining a local NuGet.Config
file we can define these package sources globally either at the user level or at the computer level as described in Configuring NuGet Behaviour
It is also possible to specify packages sources when executing the script.
dotnet script foo.csx -s https://SomePackageSource
Multiple packages sources can be specified like this:
dotnet script foo.csx -s https://SomePackageSource -s https://AnotherPackageSource
Dotnet-Script can create a standalone executable or DLL for your script.
Switch | Long switch | Description |
---|---|---|
-o | --output | Directory where the published executable should be placed. Defaults to a 'publish' folder in the current directory. |
-n | --name | The name for the generated DLL (executable not supported at this time). Defaults to the name of the script. |
 | --dll | Publish to a .dll instead of an executable. |
-c | --configuration | Configuration to use for publishing the script [Release/Debug]. Default is "Debug". |
-d | --debug | Enables debug output. |
-r | --runtime | The runtime used when publishing the self-contained executable. Defaults to your current runtime. |
The executable can be run directly, independent of a .NET installation, while the DLL can be run using the dotnet CLI like this:
dotnet script exec {path_to_dll} -- arg1 arg2
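For example, publishing might look like this (assuming the publish subcommand and a script named main.csx; the script name and runtime identifier are only illustrations):

dotnet script publish main.csx
dotnet script publish main.csx --dll --name MyScript
dotnet script publish main.csx -c release -r linux-x64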
We provide two types of caching, the dependency cache and the execution cache, which are explained in detail below. In order for any of these caches to be enabled, it is required that all NuGet package references are specified using an exact version number. The reason for this constraint is that we need to make sure that we don't execute a script with a stale dependency graph.
In order to resolve the dependencies for a script, a dotnet restore is executed under the hood to produce a project.assets.json file from which we can figure out all the dependencies we need to add to the compilation. This is an out-of-process operation and represents a significant overhead to the script execution. So this cache works by looking at all the dependencies specified in the script(s), either in the form of NuGet package references or assembly file references. If these dependencies match the dependencies from the last script execution, we skip the restore and read the dependencies from the already generated project.assets.json file. If any of the dependencies have changed, we must restore again to obtain the new dependency graph.
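For example, a package reference pinned to an exact version (the package and version below are only an illustration) keeps the dependency cache usable between runs:

// Exact version: the restored dependency graph can be reused on the next run.
#r "nuget: Newtonsoft.Json, 12.0.3"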
In order to execute a script it needs to be compiled first and since that is a CPU and time consuming operation, we make sure that we only compile when the source code has changed. This works by creating a SHA256 hash from all the script files involved in the execution. This hash is written to a temporary location along with the DLL that represents the result of the script compilation. When a script is executed the hash is computed and compared with the hash from the previous compilation. If they match there is no need to recompile and we run from the already compiled DLL. If the hashes don't match, the cache is invalidated and we recompile.
You can override this automatic caching by passing the --no-cache flag, which will bypass both caches and cause dependency resolution and script compilation to happen every time we execute the script.
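The following is a rough sketch of the hashing idea, not dotnet-script's actual implementation; the file name is made up for illustration:

// Illustration only: hash the contents of all script files involved in the execution
// and recompile only when the hash differs from the one stored with the cached DLL.
using System;
using System.Collections.Generic;
using System.IO;
using System.Linq;
using System.Security.Cryptography;
using System.Text;

static string ComputeScriptsHash(IEnumerable<string> scriptFiles)
{
    var combined = string.Concat(scriptFiles.OrderBy(f => f).Select(File.ReadAllText));
    using var sha = SHA256.Create();
    return Convert.ToHexString(sha.ComputeHash(Encoding.UTF8.GetBytes(combined)));
}

var currentHash = ComputeScriptsHash(new[] { "main.csx" });
Console.WriteLine(currentHash); // compare against the hash from the previous compilation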
The temporary location used for caches is a sub-directory named dotnet-script under (in order of priority):
1. DOTNET_SCRIPT_CACHE_LOCATION, if defined and its value is not empty.
2. On Linux, $XDG_CACHE_HOME if defined, otherwise $HOME/.cache.
3. On macOS, ~/Library/Caches.
4. Otherwise, the temp directory given by Path.GetTempPath for the platform.
The days of debugging scripts using Console.WriteLine are over. One major feature of dotnet script is the ability to debug scripts directly in VS Code. Just set a breakpoint anywhere in your script file(s) and hit F5 (start debugging).
Script packages are a way of organizing reusable scripts into NuGet packages that can be consumed by other scripts. This means that we now can leverage scripting infrastructure without the need for any kind of bootstrapping.
A script package is just a regular NuGet package that contains script files inside the content
or contentFiles
folder.
The following example shows how the scripts are laid out inside the NuGet package according to the standard convention.
└── contentFiles
└── csx
└── netstandard2.0
└── main.csx
This example contains just the main.csx
file in the root folder, but packages may have multiple script files either in the root folder or in subfolders below the root folder.
When loading a script package we will look for an entry point script to be loaded. This entry point script is identified by, for instance, a main.csx file in the root folder. If the entry point script cannot be determined, we will simply load all the script files in the package.
The advantage with using an entry point script is that we can control loading other scripts from the package.
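For instance, a hypothetical entry point main.csx inside the package (the file names are made up) could decide exactly which scripts the consumer gets:

// main.csx - the package's entry point; only what it #loads is pulled in.
#load "helpers/strings.csx"
#load "helpers/targets.csx"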
To consume a script package, all we need to do is specify the NuGet package in the #load directive.
The following example loads the simple-targets package that contains script files to be included in our script.
#load "nuget:simple-targets-csx, 6.0.0"
using static SimpleTargets;
var targets = new TargetDictionary();
targets.Add("default", () => Console.WriteLine("Hello, world!"));
Run(Args, targets);
Note: Debugging also works for script packages so that we can easily step into the scripts that are brought in using the
#load
directive.
Scripts don't actually have to exist locally on the machine. We can also execute scripts that are made available on an http(s)
endpoint.
This means that we can create a Gist on Github and execute it just by providing the URL to the Gist.
This Gist contains a script that prints out "Hello World"
We can execute the script like this
dotnet script https://gist.githubusercontent.com/seesharper/5d6859509ea8364a1fdf66bbf5b7923d/raw/0a32bac2c3ea807f9379a38e251d93e39c8131cb/HelloWorld.csx
That is a pretty long URL, so why not make it a TinyURL, like this:
dotnet script https://tinyurl.com/y8cda9zt
A pretty common scenario is that we have logic that is relative to the script path. We don't want to require the user to be in a certain directory for these paths to resolve correctly so here is how to provide the script path and the script folder regardless of the current working directory.
public static string GetScriptPath([CallerFilePath] string path = null) => path;
public static string GetScriptFolder([CallerFilePath] string path = null) => Path.GetDirectoryName(path);
Tip: Put these methods as top level methods in a separate script file and
#load
that file wherever access to the script path and/or folder is needed.
This release contains a C# REPL (Read-Evaluate-Print-Loop). The REPL mode ("interactive mode") is started by executing dotnet-script
without any arguments.
The interactive mode allows you to supply individual C# code blocks and have them executed as soon as you press Enter. The REPL is configured with the same default set of assembly references and using statements as regular CSX script execution.
Once dotnet-script
starts you will see a prompt for input. You can start typing C# code there.
~$ dotnet script
> var x = 1;
> x+x
2
If you submit an unterminated expression into the REPL (no ;
at the end), it will be evaluated and the result will be serialized using a formatter and printed in the output. This is a bit more interesting than just calling ToString()
on the object, because it attempts to capture the actual structure of the object. For example:
~$ dotnet script
> var x = new List<string>();
> x.Add("foo");
> x
List<string>(1) { "foo" }
> x.Add("bar");
> x
List<string>(2) { "foo", "bar" }
>
REPL also supports inline NuGet packages - meaning that NuGet packages can be installed into the REPL from within the REPL. This is done via our #r and #load NuGet support and uses identical syntax.
~$ dotnet script
> #r "nuget: Automapper, 6.1.1"
> using AutoMapper;
> typeof(MapperConfiguration)
[AutoMapper.MapperConfiguration]
> #load "nuget: simple-targets-csx, 6.0.0";
> using static SimpleTargets;
> typeof(TargetDictionary)
[Submission#0+SimpleTargets+TargetDictionary]
Using Roslyn syntax parsing, we also support multiline REPL mode. This means that if you have an incomplete code block and press Enter, we will automatically enter multiline mode. The mode is indicated by the * character. This is particularly useful for declaring classes and other more complex constructs.
~$ dotnet script
> class Foo {
* public string Bar {get; set;}
* }
> var foo = new Foo();
Aside from the regular C# script code, you can invoke the following commands (directives) from within the REPL:
Command | Description |
---|---|
#load | Load a script into the REPL (same as #load usage in CSX) |
#r | Load an assembly into the REPL (same as #r usage in CSX) |
#reset | Reset the REPL back to initial state (without restarting it) |
#cls | Clear the console screen without resetting the REPL state |
#exit | Exits the REPL |
You can execute a CSX script and, at the end of it, drop yourself into the context of the REPL. This way, the REPL becomes "seeded" with your code - all the classes, methods or variables are available in the REPL context. This is achieved by running a script with an -i
flag.
For example, given the following CSX script:
var msg = "Hello World";
Console.WriteLine(msg);
When you run this with the -i
flag, Hello World
is printed, REPL starts and msg
variable is available in the REPL context.
~$ dotnet script foo.csx -i
Hello World
>
You can also seed the REPL from inside the REPL - at any point - by invoking a #load
directive pointed at a specific file. For example:
~$ dotnet script
> #load "foo.csx"
Hello World
>
The following example shows how we can pipe data in and out of a script.
The UpperCase.csx
script simply converts the standard input to upper case and writes it back out to standard output.
using (var streamReader = new StreamReader(Console.OpenStandardInput()))
{
Write(streamReader.ReadToEnd().ToUpper());
}
We can now simply pipe the output from one command into our script like this.
echo "This is some text" | dotnet script UpperCase.csx
THIS IS SOME TEXT
The first thing we need to do is add the following to the launch.json file, which allows VS Code to debug a running process.
{
"name": ".NET Core Attach",
"type": "coreclr",
"request": "attach",
"processId": "${command:pickProcess}"
}
To debug this script we need a way to attach the debugger in VS Code and the simplest thing we can do here is to wait for the debugger to attach by adding this method somewhere.
public static void WaitForDebugger()
{
Console.WriteLine("Attach Debugger (VS Code)");
while(!Debugger.IsAttached)
{
}
}
To debug the script when executing it from the command line we can do something like
WaitForDebugger();
using (var streamReader = new StreamReader(Console.OpenStandardInput()))
{
Write(streamReader.ReadToEnd().ToUpper()); // <- SET BREAKPOINT HERE
}
Now when we run the script from the command line we will get
$ echo "This is some text" | dotnet script UpperCase.csx
Attach Debugger (VS Code)
This now gives us a chance to attach the debugger before stepping into the script and from VS Code, select the .NET Core Attach
debugger and pick the process that represents the executing script.
Once that is done we should see our breakpoint being hit.
By default, scripts will be compiled using the debug
configuration. This is to ensure that we can debug a script in VS Code as well as attaching a debugger for long running scripts.
There are however situations where we might need to execute a script that is compiled with the release
configuration. For instance, running benchmarks using BenchmarkDotNet is not possible unless the script is compiled with the release
configuration.
We can specify this when executing the script.
dotnet script foo.csx -c release
Starting from version 0.50.0, dotnet-script supports .NET Core 3.0 and all the C# 8 features. The way we deal with nullable reference types in dotnet-script is that we turn every warning related to nullable reference types into a compiler error. This means every warning between CS8600 and CS8655 is treated as an error when compiling the script.
Nullable reference types are turned off by default and the way we enable them is by using the #nullable enable compiler directive. This means that existing scripts will continue to work, but we can now opt in to this new feature.
#!/usr/bin/env dotnet-script
#nullable enable
string name = null;
Trying to execute the script will result in the following error
main.csx(5,15): error CS8625: Cannot convert null literal to non-nullable reference type.
We will also see this when working with scripts in VS Code under the problems panel.
Download Details:
Author: filipw
Source Code: https://github.com/filipw/dotnet-script
License: MIT License
First, we will need a table. I am creating a products table for this example, so run the following query to create the table.
CREATE TABLE `products` (
`id` int(10) unsigned NOT NULL AUTO_INCREMENT,
`name` varchar(255) COLLATE utf8mb4_unicode_ci NOT NULL,
`description` varchar(255) COLLATE utf8mb4_unicode_ci DEFAULT NULL,
`created_at` timestamp NULL DEFAULT CURRENT_TIMESTAMP,
`updated_at` datetime DEFAULT NULL,
PRIMARY KEY (`id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_unicode_ci;
Next, we will need to insert some dummy records in this table that will be deleted.
INSERT INTO `products` (`name`, `description`) VALUES
('Test product 1', 'Product description example1'),
('Test product 2', 'Product description example2'),
('Test product 3', 'Product description example3'),
('Test product 4', 'Product description example4'),
('Test product 5', 'Product description example5');
Now we are ready to create a model corresponding to this products table. Here we will create the Product model, so let's create a model file Product.php under the app directory and put the code below in it.
<?php
namespace App;
use Illuminate\Database\Eloquent\Model;
class Product extends Model
{
protected $fillable = [
'name','description'
];
}
Now, in this second step, we will create some routes to handle the requests for this example. So open the routes/web.php file and add the routes given below.
routes/web.php
Route::get('product', 'ProductController@index');
Route::delete('product/{id}', ['as'=>'product.destroy','uses'=>'ProductController@destroy']);
Route::delete('delete-multiple-product', ['as'=>'product.multiple-delete','uses'=>'ProductController@deleteMultiple']);
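The tutorial continues with the controller; here is a minimal sketch of what the ProductController methods could look like, based on the routes above (the view name and JSON responses are assumptions, the original article may differ):

<?php

namespace App\Http\Controllers;

use App\Product;
use Illuminate\Http\Request;

class ProductController extends Controller
{
    // Show the product list (the 'products.index' view name is an assumption).
    public function index()
    {
        $products = Product::all();
        return view('products.index', compact('products'));
    }

    // Delete a single product, typically called via AJAX.
    public function destroy($id)
    {
        Product::findOrFail($id)->delete();
        return response()->json(['success' => 'Product deleted successfully.']);
    }

    // Delete all products whose ids were submitted from the checked checkboxes.
    public function deleteMultiple(Request $request)
    {
        Product::whereIn('id', $request->input('ids', []))->delete();
        return response()->json(['success' => 'Products deleted successfully.']);
    }
}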
Laravel provides a user authentication package (UI Auth) to manage complete authentication: user registration, login, forgot password, and email verification.
You can create and manage authentication in Laravel 8 easily using inbuilt packages. User authentication is always the most important concern of any web application. If you want to handle application functionality and roles, you always need a user module; on the basis of the user, you can manage access rights in the application. I already shared a post on one of the latest features of Laravel 8 for managing authentication using Jetstream and Livewire. In this post, I will show you how to create authentication without using Jetstream, using the Laravel UI package instead. Here, I will be starting with a new project in Laravel 8. So, let's start.
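In practice, scaffolding authentication with the Laravel UI package comes down to a few commands (the vue preset is just one option; bootstrap and react work the same way):

composer require laravel/ui
php artisan ui vue --auth
npm install && npm run dev
php artisan migrate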
Sometimes you want to get a relationship of a relationship (nested eager loading) in Laravel; that can be achieved via the following:
Account::with(['profiles','profiles.products'])->get();
Sometimes you want to apply multiple where clauses in a single query, like this:
$results = User::where('this', '=', 1)
->where('that', '=', 1)
->where('this_too', '=', 1)
->where('that_too', '=', 1)
->where('this_as_well', '=', 1)
->where('that_as_well', '=', 1)
->where('this_one_too', '=', 1)
->where('that_one_too', '=', 1)
->where('this_one_as_well', '=', 1)
->where('that_one_as_well', '=', 1)
->get();
This can be achieved using a single where clause with the following code:
$results = $query->where([
    ['column_1', '=', 'value_1'],
    ['column_2', '<>', 'value_2'],
    // add more [COLUMN, OPERATOR, VALUE] conditions as needed
])->get();
There are situations, during bulk insertion or single-row insertion, where you may want to retrieve the last inserted id; that can be achieved as follows:
$post = Input::All();
$data = new Company;
$data->nombre = $post['name'];
$data->direccion = $post['address'];
$data->telefono = $post['phone'];
$data->email = $post['email'];
$data->giro = $post['type'];
$data->fecha_registro = date("Y-m-d H:i:s");
$data->fecha_modificacion = date("Y-m-d H:i:s");
$data->save();
//getting last inserted id
$data->id;
Sometimes you want to add a custom attribute to a model when it is loaded; this can be achieved with the following code:
class Book extends Eloquent {
protected $table = 'books';
public function toArray()
{
$array = parent::toArray();
$array['upper'] = $this->upper;
return $array;
}
public function getUpperAttribute()
{
return strtoupper($this->title);
}
}