Awesome Rust

Dylint - A tool for Loading Rust Linting Rules From Dynamic Libraries

This blog post introduces Dylint, a tool for loading Rust linting rules (or “lints”) from dynamic libraries. Dylint makes it easy for developers to maintain their own personal lint collections.
 

Previously, the simplest way to write a new Rust lint was to fork Clippy, Rust’s de facto linting tool. But this approach has drawbacks in how one runs and maintains new lints. Dylint minimizes these distractions so that developers can focus on actually writing lints.
 

First, we'll go over the current state of Rust linting and the workings of Clippy. Then, we'll explain how Dylint improves upon the status quo and offer some tips on how to begin using it. If you want to get straight to writing lints, skip to the last section.

Rust linting and Clippy

Tools like Clippy take advantage of the Rust compiler's dedicated support for linting. A Rust linter’s core component, called a “driver,” links against an appropriately named library, rustc_driver. By doing so, the driver essentially becomes a wrapper around the Rust compiler.
 

To run the linter, one sets the RUSTC_WORKSPACE_WRAPPER environment variable to point to the driver and runs cargo check. Cargo notices that the environment variable has been set and calls the driver instead of calling rustc. When the driver is called, it sets a callback in the Rust compiler’s Config struct. The callback registers some number of lints, which the Rust compiler then runs alongside its built-in lints.
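To make this concrete, a bare-bones driver can look roughly like the sketch below. This is an illustration only, not Clippy’s or Dylint’s actual code; rustc_driver and rustc_interface are unstable nightly APIs, so the exact names and signatures vary across toolchain versions.

#![feature(rustc_private)]

extern crate rustc_driver;
extern crate rustc_interface;

use rustc_interface::interface::Config;

struct LintCallbacks;

impl rustc_driver::Callbacks for LintCallbacks {
    // Called before compilation starts; this is where the driver hooks
    // into the compiler's configuration.
    fn config(&mut self, config: &mut Config) {
        config.register_lints = Some(Box::new(|_sess, _lint_store| {
            // Register custom lint passes here, e.g.:
            // _lint_store.register_late_pass(|| Box::new(MyLintPass));
        }));
    }
}

fn main() {
    // Forward the arguments Cargo passed to us straight to the compiler.
    let args: Vec<String> = std::env::args().collect();
    std::process::exit(rustc_driver::catch_with_exit_code(|| {
        rustc_driver::RunCompiler::new(&args, &mut LintCallbacks).run()
    }));
}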
 

Clippy performs a few checks to ensure it is enabled but otherwise works in the above manner. (See Figure 1 for Clippy’s architecture.) Although it may not be immediately clear upon installation, Clippy is actually two binaries: a Cargo command and a rustc driver. You can verify this by typing the following:

which cargo-clippy
which clippy-driver

Figure 1: Clippy’s architecture

Now suppose you want to write your own lints. What should you do? Well, you’ll need a driver to run them, and Clippy has a driver, so forking Clippy seems like a reasonable step to take. But there are drawbacks to this solution, namely in running and maintaining the lints that you’ll develop.
 

First, your fork will have its own copies of the two binaries, and it’s a hassle to ensure that they can be found. You’ll have to make sure that at least the Cargo command is in your PATH, and you’ll probably have to rename the binaries so that they won’t interfere with Clippy. Clearly, these steps don’t pose insurmountable problems, but you would probably rather avoid them.
 

Second, all lints (including Clippy’s) are built upon unstable compiler APIs. Lints compiled together must use the same version of those APIs. To understand why this is an issue, we’ll refer to clippy_utils—a collection of utilities that the Clippy authors have generously made public. Note that clippy_utils uses the same compiler APIs that lints do, and similarly provides no stability guarantees. (See below.)
 

Suppose you have a fork of Clippy to which you want to add a new lint. Clearly, you’ll want your new lint to use the most recent version of clippy_utils. But suppose that version uses compiler version B, while your fork of Clippy uses compiler version A. Then you’ll be faced with a dilemma: Should you use an older version of clippy_utils (one that uses compiler version A), or should you upgrade all of the lints in your fork to use compiler version B? Neither is a desirable choice.
 

Dylint addresses both of these problems. First, it provides a single Cargo command, saving you from having to manage multiple such commands. Second, for Dylint, lints are compiled together to produce dynamic libraries. So in the above situation, you could simply store your new lint in a new dynamic library that uses compiler version B. You could use this new library alongside your existing libraries for as long as you’d like and upgrade your existing libraries to the newer compiler version if you so choose.
 

Dylint provides an additional benefit related to reusing intermediate compilation results. To understand it, we need to examine how Dylint works.

How Dylint works

Like Clippy, Dylint provides a Cargo command. The user specifies the dynamic libraries from which they want lints to be loaded. Dylint runs cargo check in a way that ensures the lints are registered before control is handed over to the Rust compiler.
 

The lint-registration process is more complicated for Dylint than for Clippy, however. All of Clippy’s lints use the same compiler version, so only one driver is needed. But a Dylint user could choose to load lints from libraries that use different compiler versions.
 

Dylint handles such situations by building new drivers on-the-fly as needed. In other words, if a user wants to load lints from a library that uses compiler version A and no driver can be found for compiler version A, Dylint will build a new one. Drivers are cached in the user’s home directory, so they are rebuilt only when necessary.
 

Figure 2: Dylint architecture

This brings us to the additional benefit alluded to in the previous section. Dylint groups libraries by the compiler version they use. Libraries that use the same compiler version are loaded together, and their lints are run together. This allows intermediate compilation results (e.g., symbol resolution, type checking, trait solving, etc.) to be shared among the lints.
 

For example, in Figure 2, if libraries U and V both used compiler version A, the libraries would be grouped together. The driver for compiler version A would be invoked only once. The driver would register the lints in libraries U and V before handing control over to the Rust compiler.
 

To understand why this approach is beneficial, consider the following. Suppose that lints were stored directly in compiler drivers rather than dynamic libraries, and recall that a driver is essentially a wrapper around the Rust compiler. So if one had two lints in two compiler drivers that used the same compiler version, running those two drivers on the same code would amount to compiling that code twice. By storing lints in dynamic libraries and grouping them by compiler version, Dylint avoids these inefficiencies.
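To see the shape of this optimization, here is a hypothetical sketch (not Dylint’s actual code) of the grouping step: libraries are keyed by the toolchain they were built with, and each group later shares a single driver invocation.

use std::collections::BTreeMap;

// Hypothetical illustration: given (toolchain, library_path) pairs,
// collect the libraries that share a toolchain so that each driver,
// and therefore each compilation of the target code, runs only once.
fn group_by_toolchain(
    libraries: Vec<(String, String)>,
) -> BTreeMap<String, Vec<String>> {
    let mut groups: BTreeMap<String, Vec<String>> = BTreeMap::new();
    for (toolchain, path) in libraries {
        groups.entry(toolchain).or_default().push(path);
    }
    groups
}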

An application: Project-specific lints

Did you know that Clippy contains lints whose sole purpose is to lint Clippy’s code? It’s true. Clippy contains lints to check, for example, that every lint has an associated LintPass, that certain Clippy wrapper functions are used instead of the functions they wrap, and that every lint has a non-default description. It wouldn’t make sense to apply these lints to code other than Clippy’s. But there’s no rule that all lints must be general purpose, and Clippy takes advantage of this liberty.
 

Dylint similarly includes lints whose primary purpose is to lint Dylint’s code. For example, while developing Dylint, we found ourselves writing code like the following:

let rustup_toolchain = std::env::var("RUSTUP_TOOLCHAIN")?;
...
std::env::remove_var("RUSTUP_TOOLCHAIN");

This was bad practice. Why? Because it was only a matter of time until we fat-fingered the string literal:

std::env::remove_var("RUSTUP_TOOLCHIAN"); // Oops

A better approach is to use a constant instead of a string literal, as in the below code:

const RUSTUP_TOOLCHAIN: &str = "RUSTUP_TOOLCHAIN";
...
std::env::remove_var(RUSTUP_TOOLCHAIN);

So while working on Dylint, we wrote a lint to check for this bad practice and to make an appropriate suggestion. We applied (and still apply) that lint to the Dylint source code. The lint is called env_literal and the core of its current implementation is as follows:

impl<'tcx> LateLintPass<'tcx> for EnvLiteral {
    fn check_expr(&mut self, cx: &LateContext<'tcx>, expr: &Expr<'_>) {
        if_chain! {
            // Is this a call to `std::env::remove_var`, `set_var`, or `var`?
            if let ExprKind::Call(callee, args) = expr.kind;
            if is_expr_path_def_path(cx, callee, &REMOVE_VAR)
                || is_expr_path_def_path(cx, callee, &SET_VAR)
                || is_expr_path_def_path(cx, callee, &VAR);
            // Is the first argument an UPPER_SNAKE_CASE string literal?
            if !args.is_empty();
            if let ExprKind::Lit(lit) = &args[0].kind;
            if let LitKind::Str(symbol, _) = lit.node;
            let ident = symbol.to_ident_string();
            if is_upper_snake_case(&ident);
            // If so, warn and suggest defining a constant instead.
            then {
                span_lint_and_help(
                    cx,
                    ENV_LITERAL,
                    args[0].span,
                    "referring to an environment variable with a string literal is error prone",
                    None,
                    &format!("define a constant `{}` and use that instead", ident),
                );
            }
        }
    }
}

Here is an example of a warning it could produce:

warning: referring to an environment variable with a string literal is error prone
 --> src/main.rs:2:27
  |
2 |     let _ = std::env::var("RUSTFLAGS");
  |                           ^^^^^^^^^^^
  |
  = note: `#[warn(env_literal)]` on by default
  = help: define a constant `RUSTFLAGS` and use that instead

Recall that neither the compiler nor clippy_utils provides stability guarantees for its APIs, so future versions of env_literal may look slightly different. (In fact, a change to clippy_utils’ APIs resulted in a change to env_literal’s implementation while this article was being written!) The current version of env_literal can always be found in the examples directory of the Dylint repository.

Clippy “lints itself” in a slightly different way than Dylint, however. Clippy’s internal lints are compiled into a version of Clippy with a particular feature enabled. But for Dylint, the env_literal lint is compiled into a dynamic library. Thus, env_literal is not a part of Dylint. It’s essentially input.

Why is this important? Because you can write custom lints for your project and use Dylint to run them just as Dylint runs its own lints on itself. There’s nothing significant about the source of the lints that Dylint runs on the Dylint repository. Dylint would just as readily run your repository’s lints on your repository.

The bottom line is this: If you find yourself writing code you do not like and you can detect that code with a lint, Dylint can help you weed out that code and prevent its reintroduction.

Get to linting

Install Dylint with the following command:

cargo install cargo-dylint

We also recommend installing the dylint-link tool to facilitate linking:

cargo install dylint-link

The easiest way to write a Dylint library is to fork the dylint-template repository. The repository produces a loadable library right out of the box. You can verify this as follows:

git clone https://github.com/trailofbits/dylint-template
cd dylint-template
cargo build
DYLINT_LIBRARY_PATH=$PWD/target/debug cargo dylint fill_me_in --list

All you have to do is implement the LateLintPass trait and accommodate the symbols asking to be filled in.
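For orientation, a filled-in library might look roughly like the following. This is a hedged sketch: FILL_ME_IN/FillMeIn are the template’s placeholder names, and it assumes the declare_late_lint! convenience macro from the dylint_linting utility crate; the exact macros available depend on the template and toolchain version.

#![feature(rustc_private)]

extern crate rustc_hir;
extern crate rustc_lint;

use rustc_hir::Expr;
use rustc_lint::{LateContext, LateLintPass};

// Declares the lint, the `FillMeIn` pass type, and the registration
// plumbing (assuming `dylint_linting`'s macro).
dylint_linting::declare_late_lint! {
    pub FILL_ME_IN,
    Warn,
    "description of what the lint checks for"
}

impl<'tcx> LateLintPass<'tcx> for FillMeIn {
    fn check_expr(&mut self, _cx: &LateContext<'tcx>, _expr: &Expr<'tcx>) {
        // Inspect `_expr` here and emit a diagnostic when it matches,
        // e.g., with `clippy_utils::diagnostics::span_lint`.
    }
}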

Helpful resources for writing lints include the following:
 

Also consider using the clippy_utils crate mentioned above. It includes functions for many low-level tasks, such as looking up symbols and printing diagnostic messages, and makes writing lints significantly easier.

We owe a sincere thanks to the Clippy authors for making the clippy_utils crate available to the Rust community. We would also like to thank Philipp Krones for providing helpful comments on an earlier version of this post.

Link: https://www.trailofbits.com/post/write-rust-lints-without-forking-clippy

#rust #rustlang #staticanalysis


Dotnet Script: Run C# Scripts From The .NET CLI

dotnet script

Run C# scripts from the .NET CLI, define NuGet packages inline and edit/debug them in VS Code - all of that with full language services support from OmniSharp.

NuGet Packages

Name | Version | Framework(s)
dotnet-script (global tool) | Nuget | net6.0, net5.0, netcoreapp3.1
Dotnet.Script (CLI as Nuget) | Nuget | net6.0, net5.0, netcoreapp3.1
Dotnet.Script.Core | Nuget | netcoreapp3.1, netstandard2.0
Dotnet.Script.DependencyModel | Nuget | netstandard2.0
Dotnet.Script.DependencyModel.Nuget | Nuget | netstandard2.0

Installing

Prerequisites

The only thing we need to install is the .NET Core 3.1 SDK or the .NET 5.0 SDK.

.NET Core Global Tool

.NET Core 2.1 introduced the concept of global tools meaning that you can install dotnet-script using nothing but the .NET CLI.

dotnet tool install -g dotnet-script

You can invoke the tool using the following command: dotnet-script
Tool 'dotnet-script' (version '0.22.0') was successfully installed.

The advantage of this approach is that you can use the same command for installation across all platforms. .NET Core SDK also supports viewing a list of installed tools and their uninstallation.

dotnet tool list -g

Package Id         Version      Commands
---------------------------------------------
dotnet-script      0.22.0       dotnet-script
dotnet tool uninstall dotnet-script -g

Tool 'dotnet-script' (version '0.22.0') was successfully uninstalled.

Windows

choco install dotnet.script

We also provide a PowerShell script for installation.

(new-object Net.WebClient).DownloadString("https://raw.githubusercontent.com/filipw/dotnet-script/master/install/install.ps1") | iex

Linux and Mac

curl -s https://raw.githubusercontent.com/filipw/dotnet-script/master/install/install.sh | bash

If permission is denied we can try with sudo

curl -s https://raw.githubusercontent.com/filipw/dotnet-script/master/install/install.sh | sudo bash

Docker

A Dockerfile for running dotnet-script in a Linux container is available. Build:

cd build
docker build -t dotnet-script -f Dockerfile ..

And run:

docker run -it dotnet-script --version

Github

You can manually download all the releases in zip format from the GitHub releases page.

Usage

Our typical helloworld.csx might look like this:

Console.WriteLine("Hello world!");

That is all it takes and we can execute the script. Args are accessible via the global Args array.

dotnet script helloworld.csx

Scaffolding

Simply create a folder somewhere on your system and issue the following command.

dotnet script init

This will create main.csx along with the launch configuration needed to debug the script in VS Code.

.
├── .vscode
│   └── launch.json
├── main.csx
└── omnisharp.json

We can also initialize a folder using a custom filename.

dotnet script init custom.csx

Instead of main.csx which is the default, we now have a file named custom.csx.

.
├── .vscode
│   └── launch.json
├── custom.csx
└── omnisharp.json

Note: Executing dotnet script init inside a folder that already contains one or more script files will not create the main.csx file.

Running scripts

Scripts can be executed directly from the shell as if they were executables.

foo.csx arg1 arg2 arg3

OSX/Linux

Just like all scripts, on OSX/Linux you need to have a #! and mark the file as executable via chmod +x foo.csx. If you use dotnet script init to create your csx it will automatically have the #! directive and be marked as executable.

The OSX/Linux shebang directive should be #!/usr/bin/env dotnet-script

#!/usr/bin/env dotnet-script
Console.WriteLine("Hello world");

You can execute your script using dotnet script or dotnet-script, which allows you to pass arguments to control your script execution more.

foo.csx arg1 arg2 arg3
dotnet script foo.csx -- arg1 arg2 arg3
dotnet-script foo.csx -- arg1 arg2 arg3

Passing arguments to scripts

All arguments after -- are passed to the script in the following way:

dotnet script foo.csx -- arg1 arg2 arg3

Then you can access the arguments in the script context using the global Args collection:

foreach (var arg in Args)
{
    Console.WriteLine(arg);
}

All arguments before -- are processed by dotnet script. For example, the following command-line

dotnet script -d foo.csx -- -d

will pass the -d before -- to dotnet script and enable the debug mode whereas the -d after -- is passed to script for its own interpretation of the argument.

NuGet Packages

dotnet script has built-in support for referencing NuGet packages directly from within the script.

#r "nuget: AutoMapper, 6.1.0"


Note: Omnisharp needs to be restarted after adding a new package reference

Package Sources

We can define package sources using a NuGet.Config file in the script root folder. In addition to being used during execution of the script, it will also be used by OmniSharp that provides language services for packages resolved from these package sources.

As an alternative to maintaining a local NuGet.Config file we can define these package sources globally either at the user level or at the computer level as described in Configuring NuGet Behaviour

It is also possible to specify packages sources when executing the script.

dotnet script foo.csx -s https://SomePackageSource

Multiple packages sources can be specified like this:

dotnet script foo.csx -s https://SomePackageSource -s https://AnotherPackageSource

Creating DLLs or Exes from a CSX file

Dotnet-Script can create a standalone executable or DLL for your script.

Switch | Long switch | Description
-o | --output | Directory where the published executable should be placed. Defaults to a 'publish' folder in the current directory.
-n | --name | The name for the generated DLL (executable not supported at this time). Defaults to the name of the script.
  | --dll | Publish to a .dll instead of an executable.
-c | --configuration | Configuration to use for publishing the script [Release/Debug]. Default is "Debug".
-d | --debug | Enables debug output.
-r | --runtime | The runtime used when publishing the self-contained executable. Defaults to your current runtime.

You can run the executable directly, without a .NET installation, while the DLL can be run using the dotnet CLI like this:

dotnet script exec {path_to_dll} -- arg1 arg2

Caching

We provide two types of caching, the dependency cache and the execution cache, both explained in detail below. For either of these caches to be enabled, all NuGet package references must be specified using an exact version number. The reason for this constraint is that we need to make sure that we don't execute a script with a stale dependency graph.

Dependency Cache

In order to resolve the dependencies for a script, a dotnet restore is executed under the hood to produce a project.assets.json file from which we can figure out all the dependencies we need to add to the compilation. This is an out-of-process operation and represents a significant overhead to the script execution. So this cache works by looking at all the dependencies specified in the script(s), either in the form of NuGet package references or assembly file references. If these dependencies match the dependencies from the last script execution, we skip the restore and read the dependencies from the already generated project.assets.json file. If any of the dependencies have changed, we must restore again to obtain the new dependency graph.

Execution cache

In order to execute a script it needs to be compiled first and since that is a CPU and time consuming operation, we make sure that we only compile when the source code has changed. This works by creating a SHA256 hash from all the script files involved in the execution. This hash is written to a temporary location along with the DLL that represents the result of the script compilation. When a script is executed the hash is computed and compared with the hash from the previous compilation. If they match there is no need to recompile and we run from the already compiled DLL. If the hashes don't match, the cache is invalidated and we recompile.

You can override this automatic caching by passing --no-cache flag, which will bypass both caches and cause dependency resolution and script compilation to happen every time we execute the script.

Cache Location

The temporary location used for caches is a sub-directory named dotnet-script under (in order of priority):

  1. The path specified for the value of the environment variable named DOTNET_SCRIPT_CACHE_LOCATION, if defined and value is not empty.
  2. Linux distributions only: $XDG_CACHE_HOME if defined otherwise $HOME/.cache
  3. macOS only: ~/Library/Caches
  4. The value returned by Path.GetTempPath for the platform.

 

Debugging

The days of debugging scripts using Console.WriteLine are over. One major feature of dotnet script is the ability to debug scripts directly in VS Code. Just set a breakpoint anywhere in your script file(s) and hit F5(start debugging)


Script Packages

Script packages are a way of organizing reusable scripts into NuGet packages that can be consumed by other scripts. This means that we now can leverage scripting infrastructure without the need for any kind of bootstrapping.

Creating a script package

A script package is just a regular NuGet package that contains script files inside the content or contentFiles folder.

The following example shows how the scripts are laid out inside the NuGet package according to the standard convention.

└── contentFiles
    └── csx
        └── netstandard2.0
            └── main.csx

This example contains just the main.csx file in the root folder, but packages may have multiple script files either in the root folder or in subfolders below the root folder.

When loading a script package we will look for an entry point script to be loaded. This entry point script is identified by one of the following.

  • A script called main.csx in the root folder
  • A single script file in the root folder

If the entry point script cannot be determined, we will simply load all the script files in the package.

The advantage with using an entry point script is that we can control loading other scripts from the package.

Consuming a script package

To consume a script package, all we need to do is specify the NuGet package in the #load directive.

The following example loads the simple-targets package that contains script files to be included in our script.

#load "nuget:simple-targets-csx, 6.0.0"

using static SimpleTargets;
var targets = new TargetDictionary();

targets.Add("default", () => Console.WriteLine("Hello, world!"));

Run(Args, targets);

Note: Debugging also works for script packages so that we can easily step into the scripts that are brought in using the #load directive.

Remote Scripts

Scripts don't actually have to exist locally on the machine. We can also execute scripts that are made available on an http(s) endpoint.

This means that we can create a Gist on Github and execute it just by providing the URL to the Gist.

This Gist contains a script that prints out "Hello World"

We can execute the script like this

dotnet script https://gist.githubusercontent.com/seesharper/5d6859509ea8364a1fdf66bbf5b7923d/raw/0a32bac2c3ea807f9379a38e251d93e39c8131cb/HelloWorld.csx

That is a pretty long URL, so why not make it a TinyURL, like this:

dotnet script https://tinyurl.com/y8cda9zt

Script Location

A pretty common scenario is that we have logic that is relative to the script path. We don't want to require the user to be in a certain directory for these paths to resolve correctly so here is how to provide the script path and the script folder regardless of the current working directory.

public static string GetScriptPath([CallerFilePath] string path = null) => path;
public static string GetScriptFolder([CallerFilePath] string path = null) => Path.GetDirectoryName(path);

Tip: Put these methods as top level methods in a separate script file and #load that file wherever access to the script path and/or folder is needed.

REPL

This release contains a C# REPL (Read-Evaluate-Print-Loop). The REPL mode ("interactive mode") is started by executing dotnet-script without any arguments.

The interactive mode allows you to supply individual C# code blocks and have them executed as soon as you press Enter. The REPL is configured with the same default set of assembly references and using statements as regular CSX script execution.

Basic usage

Once dotnet-script starts you will see a prompt for input. You can start typing C# code there.

~$ dotnet script
> var x = 1;
> x+x
2

If you submit an unterminated expression into the REPL (no ; at the end), it will be evaluated and the result will be serialized using a formatter and printed in the output. This is a bit more interesting than just calling ToString() on the object, because it attempts to capture the actual structure of the object. For example:

~$ dotnet script
> var x = new List<string>();
> x.Add("foo");
> x
List<string>(1) { "foo" }
> x.Add("bar");
> x
List<string>(2) { "foo", "bar" }
>

Inline Nuget packages

REPL also supports inline Nuget packages - meaning the Nuget packages can be installed into the REPL from within the REPL. This is done via our #r and #load from Nuget support and uses identical syntax.

~$ dotnet script
> #r "nuget: Automapper, 6.1.1"
> using AutoMapper;
> typeof(MapperConfiguration)
[AutoMapper.MapperConfiguration]
> #load "nuget: simple-targets-csx, 6.0.0";
> using static SimpleTargets;
> typeof(TargetDictionary)
[Submission#0+SimpleTargets+TargetDictionary]

Multiline mode

Using Roslyn syntax parsing, we also support multiline REPL mode. This means that if you have an uncompleted code block and press Enter, we will automatically enter the multiline mode. The mode is indicated by the * character. This is particularly useful for declaring classes and other more complex constructs.

~$ dotnet script
> class Foo {
* public string Bar {get; set;}
* }
> var foo = new Foo();

REPL commands

Aside from the regular C# script code, you can invoke the following commands (directives) from within the REPL:

Command | Description
#load | Load a script into the REPL (same as #load usage in CSX)
#r | Load an assembly into the REPL (same as #r usage in CSX)
#reset | Reset the REPL back to initial state (without restarting it)
#cls | Clear the console screen without resetting the REPL state
#exit | Exits the REPL

Seeding REPL with a script

You can execute a CSX script and, at the end of it, drop yourself into the context of the REPL. This way, the REPL becomes "seeded" with your code - all the classes, methods or variables are available in the REPL context. This is achieved by running a script with an -i flag.

For example, given the following CSX script:

var msg = "Hello World";
Console.WriteLine(msg);

When you run this with the -i flag, Hello World is printed, REPL starts and msg variable is available in the REPL context.

~$ dotnet script foo.csx -i
Hello World
>

You can also seed the REPL from inside the REPL - at any point - by invoking a #load directive pointed at a specific file. For example:

~$ dotnet script
> #load "foo.csx"
Hello World
>

Piping

The following example shows how we can pipe data in and out of a script.

The UpperCase.csx script simply converts the standard input to upper case and writes it back out to standard output.

using (var streamReader = new StreamReader(Console.OpenStandardInput()))
{
    Write(streamReader.ReadToEnd().ToUpper());
}

We can now simply pipe the output from one command into our script like this.

echo "This is some text" | dotnet script UpperCase.csx
THIS IS SOME TEXT

Debugging

The first thing we need to do is add the following to the launch.json file, which allows VS Code to debug a running process.

{
    "name": ".NET Core Attach",
    "type": "coreclr",
    "request": "attach",
    "processId": "${command:pickProcess}"
}

To debug this script we need a way to attach the debugger in VS Code and the simplest thing we can do here is to wait for the debugger to attach by adding this method somewhere.

public static void WaitForDebugger()
{
    Console.WriteLine("Attach Debugger (VS Code)");
    while(!Debugger.IsAttached)
    {
    }
}

To debug the script when executing it from the command line we can do something like

WaitForDebugger();
using (var streamReader = new StreamReader(Console.OpenStandardInput()))
{
    Write(streamReader.ReadToEnd().ToUpper()); // <- SET BREAKPOINT HERE
}

Now when we run the script from the command line we will get

$ echo "This is some text" | dotnet script UpperCase.csx
Attach Debugger (VS Code)

This now gives us a chance to attach the debugger before stepping into the script and from VS Code, select the .NET Core Attach debugger and pick the process that represents the executing script.

Once that is done we should see our breakpoint being hit.

Configuration(Debug/Release)

By default, scripts will be compiled using the debug configuration. This is to ensure that we can debug a script in VS Code as well as attaching a debugger for long running scripts.

There are however situations where we might need to execute a script that is compiled with the release configuration. For instance, running benchmarks using BenchmarkDotNet is not possible unless the script is compiled with the release configuration.

We can specify this when executing the script.

dotnet script foo.csx -c release

 

Nullable reference types

Starting from version 0.50.0, dotnet-script supports .NET Core 3.0 and all the C# 8 features. The way we deal with nullable reference types in dotnet-script is that we turn every warning related to nullable reference types into a compiler error. This means every warning between CS8600 and CS8655 is treated as an error when compiling the script.

Nullable reference types are turned off by default; we enable them using the #nullable enable compiler directive. This means that existing scripts will continue to work, but we can now opt in to this new feature.

#!/usr/bin/env dotnet-script

#nullable enable

string name = null;

Trying to execute the script will result in the following error

main.csx(5,15): error CS8625: Cannot convert null literal to non-nullable reference type.

We will also see this when working with scripts in VS Code under the problems panel.


Download Details:
Author: filipw
Source Code: https://github.com/filipw/dotnet-script
License: MIT License

#dotnet  #aspdotnet  #csharp 


Serde Rust: Serialization Framework for Rust

Serde

Serde is a framework for serializing and deserializing Rust data structures efficiently and generically.


Serde in action

Add the following to your Cargo.toml, then run the code below (it also runs in the Rust playground).

[dependencies]

# The core APIs, including the Serialize and Deserialize traits. Always
# required when using Serde. The "derive" feature is only required when
# using #[derive(Serialize, Deserialize)] to make Serde work with structs
# and enums defined in your crate.
serde = { version = "1.0", features = ["derive"] }

# Each data format lives in its own crate; the sample code below uses JSON
# but you may be using a different one.
serde_json = "1.0"

 

use serde::{Serialize, Deserialize};

#[derive(Serialize, Deserialize, Debug)]
struct Point {
    x: i32,
    y: i32,
}

fn main() {
    let point = Point { x: 1, y: 2 };

    // Convert the Point to a JSON string.
    let serialized = serde_json::to_string(&point).unwrap();

    // Prints serialized = {"x":1,"y":2}
    println!("serialized = {}", serialized);

    // Convert the JSON string back to a Point.
    let deserialized: Point = serde_json::from_str(&serialized).unwrap();

    // Prints deserialized = Point { x: 1, y: 2 }
    println!("deserialized = {:?}", deserialized);
}

Getting help

Serde is one of the most widely used Rust libraries so any place that Rustaceans congregate will be able to help you out. For chat, consider trying the #rust-questions or #rust-beginners channels of the unofficial community Discord (invite: https://discord.gg/rust-lang-community), the #rust-usage or #beginners channels of the official Rust Project Discord (invite: https://discord.gg/rust-lang), or the #general stream in Zulip. For asynchronous, consider the [rust] tag on StackOverflow, the /r/rust subreddit which has a pinned weekly easy questions post, or the Rust Discourse forum. It's acceptable to file a support issue in this repo but they tend not to get as many eyes as any of the above and may get closed without a response after some time.

Download Details:
Author: serde-rs
Source Code: https://github.com/serde-rs/serde
License: View license

#rust  #rustlang 

Annie Emard

Dylint: A Tool for Running Rust Lints From Dynamic Libraries

Dylint

A tool for running Rust lints from dynamic libraries

cargo install cargo-dylint dylint-link

Dylint is a Rust linting tool, similar to Clippy. But whereas Clippy runs a predetermined, static set of lints, Dylint runs lints from user-specified, dynamic libraries. Thus, Dylint allows developers to maintain their own personal lint collections.

Note: cargo-dylint will not work correctly if installed with the --debug flag. If a debug build of cargo-dylint is needed, please build it from the cargo-dylint package within this repository.

Quick start

Running Dylint

The next three steps install Dylint and run all of this repository's example lints on a workspace:

Install cargo-dylint and dylint-link:

cargo install cargo-dylint dylint-link

Add the following to the workspace's Cargo.toml file:

[workspace.metadata.dylint]
libraries = [
    { git = "https://github.com/trailofbits/dylint", pattern = "examples/*" },
]

Run cargo-dylint:

cargo dylint --all --workspace

In the above example, the libraries are found via workspace metadata (see below).

Writing lints

You can start writing your own Dylint library by running cargo dylint --new new_lint_name. Doing so will produce a loadable library right out of the box. You can verify this as follows:

cargo dylint --new new_lint_name
cd new_lint_name
cargo build
DYLINT_LIBRARY_PATH=$PWD/target/debug cargo dylint new_lint_name --list

All you have to do is implement the LateLintPass trait and accommodate the symbols asking to be filled in.

Helpful resources for writing lints appear below.

How libraries are found

Dylint tries to run all lints in all libraries named on the command line. Dylint resolves names to libraries in the following three ways:

Via the DYLINT_LIBRARY_PATH environment variable. If DYLINT_LIBRARY_PATH is set when Dylint is started, Dylint treats it as a colon-separated list of paths, and searches each path for files with names of the form DLL_PREFIX LIBRARY_NAME '@' TOOLCHAIN DLL_SUFFIX (see Library requirements below). For each such file found, LIBRARY_NAME resolves to that file.

Via workspace metadata. If Dylint is started in a workspace, Dylint checks the workspace's Cargo.toml file for workspace.metadata.dylint.libraries (see Workspace metadata below). Dylint downloads and builds each listed entry, similar to how Cargo downloads and builds a dependency. The resulting target/release directories are searched and names are resolved in the manner described in 1 above.

By path. If a name does not resolve to a library via 1 or 2, it is treated as a path.

It is considered an error if a name used on the command line resolves to multiple libraries.

If --lib name is used, then name is treated only as a library name, and not as a path.

If --path name is used, then name is treated only as a path, and not as a library name.

If --all is used, Dylint runs all lints in all libraries discovered via 1 and 2 above.

Note: Earlier versions of Dylint searched the current package's target/debug and target/release directories for libraries. This feature has been removed.

Workspace metadata

A workspace can name the libraries it should be linted with in its Cargo.toml file. Specifically, a workspace's manifest can contain a TOML list under workspace.metadata.dylint.libraries. Each list entry must have the form of a Cargo git or path dependency, with the following differences:

  • There is no leading package name, i.e., no package =.
  • path entries can contain glob patterns, e.g., *.
  • Any entry can contain a pattern field whose value is a glob pattern. The pattern field indicates the subdirectories that contain Dylint libraries.

Dylint downloads and builds each entry, similar to how Cargo downloads and builds a dependency. The resulting target/release directories are searched for files with names of the form that Dylint recognizes (see Library requirements below).

As an example, if you include the following in your workspace's Cargo.toml file and run cargo dylint --all --workspace, Dylint will run all of this repository's example lints on your workspace:

[workspace.metadata.dylint]
libraries = [
    { git = "https://github.com/trailofbits/dylint", pattern = "examples/*" },
]

Library requirements

A Dylint library must satisfy four requirements. Note: Before trying to satisfy these explicitly, see Utilities below.

Have a filename of the form:

DLL_PREFIX LIBRARY_NAME '@' TOOLCHAIN DLL_SUFFIX

The following is a concrete example on Linux:

libquestion_mark_in_expression@nightly-2021-04-08-x86_64-unknown-linux-gnu.so

The filename components are as follows (the sketch after this list shows how they combine):

  • DLL_PREFIX and DLL_SUFFIX are OS-specific strings. For example, on Linux, they are lib and .so, respectively.
  • LIBRARY_NAME is a name chosen by the library's author.
  • TOOLCHAIN is the Rust toolchain for which the library is compiled, e.g., nightly-2021-04-08-x86_64-unknown-linux-gnu.
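As a quick illustration of the naming scheme, the expected filename can be computed from the library name and toolchain using the OS-specific constants in Rust’s standard library. This helper is hypothetical, not part of Dylint’s API:

// Hypothetical helper: build the filename Dylint searches for.
fn dylint_library_filename(library_name: &str, toolchain: &str) -> String {
    format!(
        "{}{}@{}{}",
        std::env::consts::DLL_PREFIX, // "lib" on Linux
        library_name,
        toolchain,
        std::env::consts::DLL_SUFFIX // ".so" on Linux
    )
}

fn main() {
    // Prints the concrete example from above on Linux:
    // libquestion_mark_in_expression@nightly-2021-04-08-x86_64-unknown-linux-gnu.so
    println!(
        "{}",
        dylint_library_filename(
            "question_mark_in_expression",
            "nightly-2021-04-08-x86_64-unknown-linux-gnu"
        )
    );
}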

Export a dylint_version function:

extern "C" fn dylint_version() -> *mut std::os::raw::c_char

This function should return the string "0.1.0". This may change in future versions of Dylint.

Export a register_lints function:

fn register_lints(sess: &rustc_session::Session, lint_store: &mut rustc_lint::LintStore)

This is a function called by the Rust compiler. It is documented here.

Link against the rustc_driver dynamic library. This ensures the library uses Dylint's copies of the Rust compiler crates. This requirement can be satisfied by including the following declaration in your library's lib.rs file:

extern crate rustc_driver;

Dylint provides utilities to help meet the above requirements. If your library uses the dylint-link tool and the dylint_library! macro, then all you should have to do is implement the register_lints function.
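Putting requirements 2 through 4 together, a library’s lib.rs can look roughly like the sketch below. It is a sketch only: it assumes the dylint_library! macro lives in the dylint_linting utility crate, and the registration body is a placeholder.

#![feature(rustc_private)]

// Defines `dylint_version` and the `extern crate rustc_driver`
// declaration (requirements 2 and 4), assuming `dylint_linting`.
dylint_linting::dylint_library!();

extern crate rustc_lint;
extern crate rustc_session;

// Requirement 3: the entry point the Rust compiler calls to register lints.
#[no_mangle]
pub fn register_lints(
    _sess: &rustc_session::Session,
    _lint_store: &mut rustc_lint::LintStore,
) {
    // Register your lint passes here, e.g.:
    // _lint_store.register_late_pass(|| Box::new(MyLintPass));
}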

Utilities

The following utilities can be helpful for writing Dylint libraries:

  • dylint-link is a wrapper around Rust's default linker (cc) that creates a copy of your library with a filename that Dylint recognizes.
  • dylint_library! is a macro that automatically defines the dylint_version function and adds the extern crate rustc_driver declaration.
  • ui_test is a function that can be used to test Dylint libraries. It provides convenient access to the compiletest_rs package.
  • clippy_utils is a collection of utilities to make writing lints easier. It is generously made public by the Rust Clippy Developers. Note that, like rustc, clippy_utils provides no stability guarantees for its APIs.

VS Code integration

Dylint results can be viewed in VS Code using rust-analyzer. To do so, add the following to your VS Code settings.json file:

    "rust-analyzer.checkOnSave.overrideCommand": [
        "cargo",
        "dylint",
        "--all",
        "--workspace",
        "--",
        "--all-targets",
        "--message-format=json"
    ]

If you want to use rust-analyzer inside a lint library, you need to add the following to your VS Code settings.json file:

    "rust-analyzer.rustcSource": "discover",

And add the following to the library's Cargo.toml file:

[package.metadata.rust-analyzer]
rustc_private = true

Limitations

To run a library's lints on a package, Dylint tries to build the package with the same toolchain used to build the library. So if a package requires a specific toolchain to build, Dylint may not be able to apply certain libraries to that package.

One way this problem can manifest itself is if you try to run one library's lints on the source code of another library. That is, if two libraries use different toolchains, they may not be applicable to each other.

Resources

Helpful resources for writing lints include the following:

Author: trailofbits
Source Code: https://github.com/trailofbits/dylint
License: View license

#rust 

Sunny Kunde

Top 12 Most Used Tools By Developers In 2020

Frameworks and libraries can be said to be the fundamental building blocks when developers build software or applications. These tools help cut out repetitive tasks as well as reduce the amount of code that developers need to write for a particular piece of software.

Recently, the Stack Overflow Developer Survey 2020 surveyed nearly 65,000 developers, who voted for their go-to tools and libraries. Here, we list down the top 12 frameworks and libraries from the survey that are most used by developers around the globe in 2020.

(The libraries are listed according to their number of Stars in GitHub)

1| TensorFlow

GitHub Stars: 147k

Rank: 5

About: Originally developed by researchers of the Google Brain team, TensorFlow is an end-to-end open-source platform for machine learning. It has a comprehensive, flexible ecosystem of tools, libraries, and community resources that lets researchers push the state of the art in ML. It allows developers to easily build and deploy ML-powered applications.

Know more here.

2| Flutter

GitHub Stars: 98.3k

Rank: 9

About: Created by Google, Flutter is a free and open-source software development kit (SDK) which enables fast user experiences for mobile, web and desktop from a single codebase. The SDK works with existing code and is used by developers and organisations around the world.


#opinions #developer tools #frameworks #java tools #libraries #most used tools by developers #python tools