In this series, “Even More Python for Beginners - Data Tools”, we’re going to help you build your toolkit for getting into data science and machine learning using Python.
Once you have the data, the next step is to load it into memory as a DataFrame. Fortunately, pandas DataFrames make it quick to read and save data.
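For example, here is a minimal sketch of reading a CSV file into a DataFrame (the file name airports.csv is just a placeholder for your own data):
import pandas as pd

# Read a CSV file from disk into a DataFrame (airports.csv is a hypothetical file)
airports_df = pd.read_csv('airports.csv')

# Inspect what was loaded
print(airports_df.head())    # first five rows
print(airports_df.shape)     # (number of rows, number of columns)

# Saving back out is just as quick
airports_df.to_csv('airports_copy.csv', index=False)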
#python #machine-learning #data-science
NAME
cluster-ssh-tools - a collection of cluster ssh tools
HISTORY
I often work on clusters of machines where I need to do the same operation to many hosts at the same time. I started out with regular shell loops:
for i in `seq -f '%02g' 1 20`
do
ssh root@hostname$i.dc.domain.com reboot
done
This worked fine for about half a day. Then I looked at DSH and similar tools that were available and easy to find in 2007. Over the course of time I built up cl-run.pl and a couple of copies like cl-rsync.pl and cl-psgrep.pl. It didn't take long before I split all the common bits out into a module and made it a bit more generic. Then the rest of the tools were pretty trivial to throw together as I needed them.
That's all to say: these tools work well for me, but they are not good examples of perl coding, nor are they right for everybody.
SCALABILITY
I've had good luck using most of these tools on 300+ hosts at a time from a bastion host with 8G of RAM and plenty of available CPU cycles. Currently I run these all the time from a smallish Linux VM (2G RAM, 2 vcpus), my workstation, and my Macbook Air and haven't ever had a problem with performance.
SYNOPSIS
cl-run.pl # run a command or script
cl-rsync.pl # parallel rsync
cl-sendfile.pl # push a file out
cl-gatherfile.pl # pull a file in (sorted by hostname)
cl-ping.pl # ping hosts
cl-killall.pl # kill a process on hosts with a regular expression
cl-psgrep.pl # look for processes across the cluster
cl-netstat.pl # a distributed network I/O display
nssh.rb # ssh wrapper that sets screen title & other things
EXAMPLES
$> cat > ~/.dsh/machines.nosqldb-dev <<EOF
nosqldb-dev12.tobert.org
nosqldb-dev11.tobert.org
nosqldb-dev10.tobert.org
nosqldb-dev9.tobert.org
nosqldb-dev8.tobert.org
nosqldb-dev7.tobert.org
nosqldb-dev6.tobert.org
nosqldb-dev5.tobert.org
nosqldb-dev4.tobert.org
nosqldb-dev3.tobert.org
nosqldb-dev2.tobert.org
nosqldb-dev1.tobert.org
EOF
# set default list to save typing, I generally do not use this
$> ln -sf ~/.dsh/machines.nosqldb-dev ~/.dsh/machines.list
# set up a user account (cheezy example, assumes user@localhost has root@remotehost keys set up)
$> cl-run.pl --root -c "useradd -m tobert"
$> cl-rsync.pl --root -l ~/.ssh -r /home/tobert
$> cl-run.pl --root -c "chown -R tobert /home/tobert"
$> cl-run.pl --root -c "(grep -q '^tobert' /etc/sudoers) || echo 'tobert ALL=(ALL) NOPASSWD: ALL' >> /etc/sudoers"
$> cl-run.pl --list nosqldb-dev -c "uname -a"
nosqldb-dev12.tobert.org: Linux ip-xx-xx-xx-xx 2.6.32-316-ec2 #31-Ubuntu SMP Wed May 18 14:10:36 UTC 2011 x86_64 GNU/Linux
nosqldb-dev1.tobert.org: Linux ip-xx-xx-xx-xx 2.6.32-316-ec2 #31-Ubuntu SMP Wed May 18 14:10:36 UTC 2011 x86_64 GNU/Linux
nosqldb-dev11.tobert.org: Linux ip-xx-xx-xx-xx 2.6.32-316-ec2 #31-Ubuntu SMP Wed May 18 14:10:36 UTC 2011 x86_64 GNU/Linux
nosqldb-dev5.tobert.org: Linux ip-xx-xx-xx-xx 2.6.32-316-ec2 #31-Ubuntu SMP Wed May 18 14:10:36 UTC 2011 x86_64 GNU/Linux
nosqldb-dev2.tobert.org: Linux ip-xx-xx-xx-xx 2.6.32-316-ec2 #31-Ubuntu SMP Wed May 18 14:10:36 UTC 2011 x86_64 GNU/Linux
nosqldb-dev4.tobert.org: Linux ip-xx-xx-xx-xx 2.6.32-316-ec2 #36-Ubuntu SMP Fri Jul 8 18:12:30 UTC 2011 x86_64 GNU/Linux
nosqldb-dev6.tobert.org: Linux ip-xx-xx-xx-xx 2.6.32-316-ec2 #36-Ubuntu SMP Fri Jul 8 18:12:30 UTC 2011 x86_64 GNU/Linux
nosqldb-dev10.tobert.org: Linux ip-xx-xx-xx-xx 2.6.32-316-ec2 #31-Ubuntu SMP Wed May 18 14:10:36 UTC 2011 x86_64 GNU/Linux
nosqldb-dev3.tobert.org: Linux ip-xx-xx-xx-xx 2.6.32-316-ec2 #36-Ubuntu SMP Fri Jul 8 18:12:30 UTC 2011 x86_64 GNU/Linux
nosqldb-dev8.tobert.org: Linux ip-xx-xx-xx-xx 2.6.32-316-ec2 #31-Ubuntu SMP Wed May 18 14:10:36 UTC 2011 x86_64 GNU/Linux
nosqldb-dev9.tobert.org: Linux ip-xx-xx-xx-xx 2.6.32-316-ec2 #31-Ubuntu SMP Wed May 18 14:10:36 UTC 2011 x86_64 GNU/Linux
nosqldb-dev7.tobert.org: Linux ip-xx-xx-xx-xx 2.6.32-316-ec2 #31-Ubuntu SMP Wed May 18 14:10:36 UTC 2011 x86_64 GNU/Linux
$> cl-run.pl -c "sudo nohup dd if=/dev/zero of=/dev/null bs=1M &"
$> cl-netstat.pl --list nosqldb-dev --device md3
hostname: eth0_total eth0_recv eth0_send read_iops write_iops 1min 5min 15min
--------------------------------------------------------------------------------------------------------------
nosqldb-dev12: 864 121 743 0/s 0/s 0.00 0.00 0.00
nosqldb-dev11: 864 121 743 0/s 0/s 0.00 0.00 0.00
nosqldb-dev10: 846 121 725 0/s 0/s 0.00 0.00 0.00
nosqldb-dev9: 840 128 712 0/s 111,608/s 1.00 1.01 1.00
nosqldb-dev8: 827 117 710 0/s 120,828/s 1.04 1.03 1.00
nosqldb-dev7: 999 175 824 0/s 93,479/s 1.02 1.03 1.00
nosqldb-dev6: 1,674 201 1,473 0/s 0/s 0.00 0.00 0.00
nosqldb-dev5: 947 136 811 0/s 0/s 0.00 0.00 0.00
nosqldb-dev4: 1,674 201 1,473 0/s 0/s 0.00 0.00 0.00
nosqldb-dev3: 961 136 825 0/s 0/s 0.00 0.00 0.00
nosqldb-dev2: 961 136 825 0/s 0/s 0.00 0.00 0.00
nosqldb-dev1: 967 136 831 0/s 0/s 0.00 0.00 0.00
Total: 12,424 Recv: 1,729 Send: 10,695 (0 mbit/s) | 0 read/s 325,915 write/s
Average: 12,828 Recv: 151 Send: 917 (0 mbit/s) | 0 read/s 129,660 write/s
nssh.rb does a few nice things around sshing inside GNU screen. In the original version, all it did was set the screen title automatically by grabbing the hostname off the args. Now it does quite a bit more, including letting you create a bunch of new named sessions in screen without a lot of typing.
It also tries to flip CNAMEs to A records automatically while still setting your screen title to the CNAME. This can be pretty handy when working with lots of EC2 hosts where you may not necessarily have set up all the CNAMEs in ~/.ssh/config.
By default, GNU screen has MAXWIN at 40. I almost always run a rebuilt version from the git head with MAXWIN 512.
$> nssh.rb hostname.tobert.org
# in screen, ctrl-a c, then
$> nssh.rb reset
$> nssh.rb next --list nosqldb-dev
# ctrl-a c
$> nssh.rb next --list nosqldb-dev
# ctrl-a c
# etc. ...
And finally, the latest incarnation of my screenrc generation script is included. At the moment, I hard-code a list of clusters I want to connect to at screen startup so I can do something like the following after reboots:
$> ssh-add
$> generate-screen-config.rb
$> screen -c ~/.screenrc-main -S main -T xterm-color -U
This repo includes .screenrc-main based on what I use all the time.
REQUIREMENTS
Base perl with Tie::IxHash for most of the tools. cl-netstat.pl requires Net::SSH2 built against a fairly modern libssh2. The ruby utils are probably fine with a base system ruby 1.8 or 1.9.
SSH agent support requires Net::SSH2 >= 0.40.
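On a typical Linux host, the Perl modules can usually be pulled straight from CPAN (a rough sketch; libssh2 development headers need to be present first for Net::SSH2):
$> sudo cpan Tie::IxHash Net::SSH2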
INSTALLATION
I usually symlink all these files into ~/bin, which my ~/.profile sets to be in my PATH.
$> mkdir ~/bin ~/src
$> cd ~/src
$> git clone https://github.com/tobert/perl-ssh-tools.git
$> ln -s ~/src/perl-ssh-tools/* ~/bin/
$> export PATH=~/bin:$PATH
OSX
You'll need libssh2 and the perl modules. If you're using Macports:
$> sudo port install perl
$> sudo port install libssh2
$> sudo /opt/local/bin/perl -MCPAN -e 'install Net::SSH2'
$> sudo /opt/local/bin/perl -MCPAN -e 'install Tie::IxHash'
SEE ALSO
The docs for common options are in DshPerlHostLoop.pm.
perldoc ~/bin/DshPerlHostLoop.pm
All of the utilities have their own POD and use Pod::Usage.
cl-run.pl
cl-rsync.pl
cl-sendfile.pl
cl-gatherfile.pl
cl-ping.pl
cl-killall.pl
cl-psgrep.pl
cl-netstat.pl
AUTHORS
Al Tobey <tobert@gmail.com>
COPYRIGHT AND LICENSE
This software is copyright (c) 2007-2013 by Al Tobey.
This is free software; you can redistribute it and/or modify it under the terms of the Artistic License 2.0. (Note that, unlike the Artistic License 1.0, version 2.0 is GPL compatible by itself, hence there is no benefit to having an Artistic 2.0 / GPL disjunction.) See the file LICENSE for details.
Author: tobert
Source Code: https://github.com/tobert/perl-ssh-tools
Run C# scripts from the .NET CLI, define NuGet packages inline and edit/debug them in VS Code - all of that with full language services support from OmniSharp.
Name | Framework(s) |
---|---|
dotnet-script (global tool) | net6.0, net5.0, netcoreapp3.1 |
Dotnet.Script (CLI as Nuget) | net6.0, net5.0, netcoreapp3.1 |
Dotnet.Script.Core | netcoreapp3.1, netstandard2.0 |
Dotnet.Script.DependencyModel | netstandard2.0 |
Dotnet.Script.DependencyModel.Nuget | netstandard2.0 |
The only thing we need to install is the .NET Core 3.1, .NET 5.0, or .NET 6.0 SDK.
.NET Core 2.1 introduced the concept of global tools, meaning that you can install dotnet-script using nothing but the .NET CLI.
dotnet tool install -g dotnet-script
You can invoke the tool using the following command: dotnet-script
Tool 'dotnet-script' (version '0.22.0') was successfully installed.
The advantage of this approach is that you can use the same command for installation across all platforms. .NET Core SDK also supports viewing a list of installed tools and their uninstallation.
dotnet tool list -g
Package Id Version Commands
---------------------------------------------
dotnet-script 0.22.0 dotnet-script
dotnet tool uninstall dotnet-script -g
Tool 'dotnet-script' (version '0.22.0') was successfully uninstalled.
On Windows, the tool can also be installed with Chocolatey.
choco install dotnet.script
We also provide a PowerShell script for installation.
(new-object Net.WebClient).DownloadString("https://raw.githubusercontent.com/filipw/dotnet-script/master/install/install.ps1") | iex
On Linux and macOS, a bash install script is available.
curl -s https://raw.githubusercontent.com/filipw/dotnet-script/master/install/install.sh | bash
If permission is denied we can try with sudo
curl -s https://raw.githubusercontent.com/filipw/dotnet-script/master/install/install.sh | sudo bash
A Dockerfile for running dotnet-script in a Linux container is available. Build:
cd build
docker build -t dotnet-script -f Dockerfile ..
And run:
docker run -it dotnet-script --version
You can manually download all the releases in zip
format from the GitHub releases page.
Our typical helloworld.csx
might look like this:
Console.WriteLine("Hello world!");
That is all it takes and we can execute the script. Args are accessible via the global Args array.
dotnet script helloworld.csx
Simply create a folder somewhere on your system and issue the following command.
dotnet script init
This will create main.csx
along with the launch configuration needed to debug the script in VS Code.
.
├── .vscode
│ └── launch.json
├── main.csx
└── omnisharp.json
We can also initialize a folder using a custom filename.
dotnet script init custom.csx
Instead of the default main.csx, we now have a file named custom.csx.
.
├── .vscode
│ └── launch.json
├── custom.csx
└── omnisharp.json
Note: Executing dotnet script init inside a folder that already contains one or more script files will not create the main.csx file.
Scripts can be executed directly from the shell as if they were executables.
foo.csx arg1 arg2 arg3
OSX/Linux
Just like all scripts, on OSX/Linux you need to have a #! directive and mark the file as executable via chmod +x foo.csx. If you use dotnet script init to create your csx, it will automatically have the #! directive and be marked as executable.
The OSX/Linux shebang directive should be #!/usr/bin/env dotnet-script
#!/usr/bin/env dotnet-script
Console.WriteLine("Hello world");
You can execute your script using dotnet script or dotnet-script, which allows you to pass arguments to control your script execution more.
foo.csx arg1 arg2 arg3
dotnet script foo.csx -- arg1 arg2 arg3
dotnet-script foo.csx -- arg1 arg2 arg3
All arguments after --
are passed to the script in the following way:
dotnet script foo.csx -- arg1 arg2 arg3
Then you can access the arguments in the script context using the global Args
collection:
foreach (var arg in Args)
{
Console.WriteLine(arg);
}
All arguments before -- are processed by dotnet script. For example, the following command line
dotnet script -d foo.csx -- -d
will pass the -d before -- to dotnet script and enable debug mode, whereas the -d after -- is passed to the script for its own interpretation of the argument.
dotnet script
has built-in support for referencing NuGet packages directly from within the script.
#r "nuget: AutoMapper, 6.1.0"
Note: OmniSharp needs to be restarted after adding a new package reference.
We can define package sources using a NuGet.Config
file in the script root folder. In addition to being used during execution of the script, it will also be used by OmniSharp
that provides language services for packages resolved from these package sources.
As an alternative to maintaining a local NuGet.Config
file we can define these package sources globally either at the user level or at the computer level as described in Configuring NuGet Behaviour
It is also possible to specify package sources when executing the script.
dotnet script foo.csx -s https://SomePackageSource
Multiple package sources can be specified like this:
dotnet script foo.csx -s https://SomePackageSource -s https://AnotherPackageSource
Dotnet-Script can create a standalone executable or DLL for your script.
Switch | Long switch | Description |
---|---|---|
-o | --output | Directory where the published executable should be placed. Defaults to a 'publish' folder in the current directory. |
-n | --name | The name for the generated DLL (executable not supported at this time). Defaults to the name of the script. |
 | --dll | Publish to a .dll instead of an executable. |
-c | --configuration | Configuration to use for publishing the script [Release/Debug]. Defaults to "Debug". |
-d | --debug | Enables debug output. |
-r | --runtime | The runtime used when publishing the self-contained executable. Defaults to your current runtime. |
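For example, a publish invocation might look like this (a sketch, assuming the switches above belong to the publish subcommand and foo.csx is your script):
dotnet script publish foo.csx -o ./publish -c release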
The executable can be run directly, independent of a dotnet installation, while the DLL can be run using the dotnet CLI like this:
dotnet script exec {path_to_dll} -- arg1 arg2
We provide two types of caching, the dependency cache and the execution cache, which are explained in detail below. In order for either of these caches to be enabled, it is required that all NuGet package references are specified using an exact version number. The reason for this constraint is that we need to make sure that we don't execute a script with a stale dependency graph.
In order to resolve the dependencies for a script, a dotnet restore
is executed under the hood to produce a project.assets.json
file from which we can figure out all the dependencies we need to add to the compilation. This is an out-of-process operation and represents a significant overhead to the script execution. So this cache works by looking at all the dependencies specified in the script(s), either in the form of NuGet package references or assembly file references. If these dependencies match the dependencies from the last script execution, we skip the restore and read the dependencies from the already generated project.assets.json
file. If any of the dependencies has changed, we must restore again to obtain the new dependency graph.
In order to execute a script it needs to be compiled first and since that is a CPU and time consuming operation, we make sure that we only compile when the source code has changed. This works by creating a SHA256 hash from all the script files involved in the execution. This hash is written to a temporary location along with the DLL that represents the result of the script compilation. When a script is executed the hash is computed and compared with the hash from the previous compilation. If they match there is no need to recompile and we run from the already compiled DLL. If the hashes don't match, the cache is invalidated and we recompile.
You can override this automatic caching by passing the --no-cache flag, which will bypass both caches and cause dependency resolution and script compilation to happen every time we execute the script.
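For example, to force a full restore and recompile of a script (foo.csx as used in the earlier examples):
dotnet script foo.csx --no-cache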
The temporary location used for caches is a sub-directory named dotnet-script under (in order of priority):
- DOTNET_SCRIPT_CACHE_LOCATION, if defined and its value is not empty.
- $XDG_CACHE_HOME, if defined; otherwise $HOME/.cache
- ~/Library/Caches
- Path.GetTempPath for the platform.
The days of debugging scripts using Console.WriteLine are over. One major feature of dotnet script is the ability to debug scripts directly in VS Code. Just set a breakpoint anywhere in your script file(s) and hit F5 (start debugging).
Script packages are a way of organizing reusable scripts into NuGet packages that can be consumed by other scripts. This means that we now can leverage scripting infrastructure without the need for any kind of bootstrapping.
A script package is just a regular NuGet package that contains script files inside the content
or contentFiles
folder.
The following example shows how the scripts are laid out inside the NuGet package according to the standard convention.
└── contentFiles
└── csx
└── netstandard2.0
└── main.csx
This example contains just the main.csx
file in the root folder, but packages may have multiple script files either in the root folder or in subfolders below the root folder.
When loading a script package we will look for an entry point script to be loaded. This entry point script is identified by one of the following:
- main.csx in the root folder
If the entry point script cannot be determined, we will simply load all the script files in the package.
The advantage with using an entry point script is that we can control loading other scripts from the package.
To consume a script package, all we need to do is specify the NuGet package in the #load directive.
The following example loads the simple-targets package that contains script files to be included in our script.
#load "nuget:simple-targets-csx, 6.0.0"
using static SimpleTargets;
var targets = new TargetDictionary();
targets.Add("default", () => Console.WriteLine("Hello, world!"));
Run(Args, targets);
Note: Debugging also works for script packages so that we can easily step into the scripts that are brought in using the
#load
directive.
Scripts don't actually have to exist locally on the machine. We can also execute scripts that are made available on an http(s)
endpoint.
This means that we can create a Gist on Github and execute it just by providing the URL to the Gist.
This Gist contains a script that prints out "Hello World"
We can execute the script like this
dotnet script https://gist.githubusercontent.com/seesharper/5d6859509ea8364a1fdf66bbf5b7923d/raw/0a32bac2c3ea807f9379a38e251d93e39c8131cb/HelloWorld.csx
That is a pretty long URL, so why not make it a TinyURL, like this:
dotnet script https://tinyurl.com/y8cda9zt
A pretty common scenario is that we have logic that is relative to the script path. We don't want to require the user to be in a certain directory for these paths to resolve correctly so here is how to provide the script path and the script folder regardless of the current working directory.
public static string GetScriptPath([CallerFilePath] string path = null) => path;
public static string GetScriptFolder([CallerFilePath] string path = null) => Path.GetDirectoryName(path);
Tip: Put these methods as top level methods in a separate script file and
#load
that file wherever access to the script path and/or folder is needed.
This release contains a C# REPL (Read-Evaluate-Print-Loop). The REPL mode ("interactive mode") is started by executing dotnet-script
without any arguments.
The interactive mode allows you to supply individual C# code blocks and have them executed as soon as you press Enter. The REPL is configured with the same default set of assembly references and using statements as regular CSX script execution.
Once dotnet-script
starts you will see a prompt for input. You can start typing C# code there.
~$ dotnet script
> var x = 1;
> x+x
2
If you submit an unterminated expression into the REPL (no ;
at the end), it will be evaluated and the result will be serialized using a formatter and printed in the output. This is a bit more interesting than just calling ToString()
on the object, because it attempts to capture the actual structure of the object. For example:
~$ dotnet script
> var x = new List<string>();
> x.Add("foo");
> x
List<string>(1) { "foo" }
> x.Add("bar");
> x
List<string>(2) { "foo", "bar" }
>
The REPL also supports inline NuGet packages, meaning that NuGet packages can be installed into the REPL from within the REPL. This is done via our #r and #load from NuGet support and uses identical syntax.
~$ dotnet script
> #r "nuget: Automapper, 6.1.1"
> using AutoMapper;
> typeof(MapperConfiguration)
[AutoMapper.MapperConfiguration]
> #load "nuget: simple-targets-csx, 6.0.0";
> using static SimpleTargets;
> typeof(TargetDictionary)
[Submission#0+SimpleTargets+TargetDictionary]
Using Roslyn syntax parsing, we also support multiline REPL mode. This means that if you have an uncompleted code block and press Enter, we will automatically enter the multiline mode. The mode is indicated by the *
character. This is particularly useful for declaring classes and other more complex constructs.
~$ dotnet script
> class Foo {
* public string Bar {get; set;}
* }
> var foo = new Foo();
Aside from the regular C# script code, you can invoke the following commands (directives) from within the REPL:
Command | Description |
---|---|
#load | Load a script into the REPL (same as #load usage in CSX) |
#r | Load an assembly into the REPL (same as #r usage in CSX) |
#reset | Reset the REPL back to initial state (without restarting it) |
#cls | Clear the console screen without resetting the REPL state |
#exit | Exits the REPL |
You can execute a CSX script and, at the end of it, drop yourself into the context of the REPL. This way, the REPL becomes "seeded" with your code - all the classes, methods or variables are available in the REPL context. This is achieved by running a script with an -i
flag.
For example, given the following CSX script:
var msg = "Hello World";
Console.WriteLine(msg);
When you run this with the -i
flag, Hello World
is printed, REPL starts and msg
variable is available in the REPL context.
~$ dotnet script foo.csx -i
Hello World
>
You can also seed the REPL from inside the REPL - at any point - by invoking a #load
directive pointed at a specific file. For example:
~$ dotnet script
> #load "foo.csx"
Hello World
>
The following example shows how we can pipe data in and out of a script.
The UpperCase.csx
script simply converts the standard input to upper case and writes it back out to standard output.
using (var streamReader = new StreamReader(Console.OpenStandardInput()))
{
Write(streamReader.ReadToEnd().ToUpper());
}
We can now simply pipe the output from one command into our script like this.
echo "This is some text" | dotnet script UpperCase.csx
THIS IS SOME TEXT
The first thing we need to do is add the following to the launch.json file, which allows VS Code to debug a running process.
{
"name": ".NET Core Attach",
"type": "coreclr",
"request": "attach",
"processId": "${command:pickProcess}"
}
To debug this script we need a way to attach the debugger in VS Code and the simplest thing we can do here is to wait for the debugger to attach by adding this method somewhere.
public static void WaitForDebugger()
{
Console.WriteLine("Attach Debugger (VS Code)");
while(!Debugger.IsAttached)
{
}
}
To debug the script when executing it from the command line we can do something like
WaitForDebugger();
using (var streamReader = new StreamReader(Console.OpenStandardInput()))
{
Write(streamReader.ReadToEnd().ToUpper()); // <- SET BREAKPOINT HERE
}
Now when we run the script from the command line we will get
$ echo "This is some text" | dotnet script UpperCase.csx
Attach Debugger (VS Code)
This now gives us a chance to attach the debugger before stepping into the script. From VS Code, select the .NET Core Attach debugger and pick the process that represents the executing script.
Once that is done we should see our breakpoint being hit.
By default, scripts will be compiled using the debug
configuration. This is to ensure that we can debug a script in VS Code as well as attaching a debugger for long running scripts.
There are however situations where we might need to execute a script that is compiled with the release
configuration. For instance, running benchmarks using BenchmarkDotNet is not possible unless the script is compiled with the release
configuration.
We can specify this when executing the script.
dotnet script foo.csx -c release
Starting from version 0.50.0, dotnet-script supports .NET Core 3.0 and all the C# 8 features. The way we deal with nullable reference types in dotnet-script is that we turn every warning related to nullable reference types into compiler errors. This means every warning between CS8600 and CS8655 is treated as an error when compiling the script.
Nullable reference types are turned off by default, and the way to enable them is with the #nullable enable compiler directive. This means that existing scripts will continue to work, but we can now opt in to this new feature.
#!/usr/bin/env dotnet-script
#nullable enable
string name = null;
Trying to execute the script will result in the following error
main.csx(5,15): error CS8625: Cannot convert null literal to non-nullable reference type.
We will also see this when working with scripts in VS Code under the problems panel.
Download Details:
Author: filipw
Source Code: https://github.com/filipw/dotnet-script
License: MIT License
At the end of 2019, Python is one of the fastest-growing programming languages, and more than 10% of developers have opted for Python development.
In the programming world, data types play an important role. Each variable is stored as a particular data type, which determines what operations it supports. Python has two kinds of objects: mutable and immutable.
Objects whose size, value, or sequence of elements can be modified after creation are called mutable objects.
Mutable data types: list, dict, set, bytearray.
Objects whose size, value, or sequence of elements cannot be modified after creation are called immutable objects.
Immutable data types: int, float, complex, str, tuple, bytes, frozenset.
id() and type() are used to find the identity and the data type of an object:
a = 25 + 85j
type(a)
# output: <class 'complex'>
b = {1: 10, 2: "Pinky"}
id(b)
# output: 238989244168
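To make the mutable/immutable distinction concrete, here is a short illustrative sketch (not part of the original examples):
# A list is mutable: it can be changed in place and keeps the same id()
nums = [1, 2, 3]
print(id(nums))
nums.append(4)       # modifies the existing object
print(id(nums))      # same id as before

# A tuple is immutable: trying to change it raises an error
point = (1, 2, 3)
try:
    point[0] = 10
except TypeError as error:
    print(error)     # 'tuple' object does not support item assignment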
a = str("Hello python world")    # str
b = int(18)                      # int
c = float(20482.5)               # float
d = complex(5 + 85j)             # complex
e = list(("python", "fast", "growing", "in", 2018))    # list
f = tuple(("python", "easy", "learning"))              # tuple
g = range(10)                    # range
h = dict(name="Vidu", age=36)    # dict
i = set(("python", "fast", "growing", "in", 2018))     # set
j = frozenset(("python", "fast", "growing", "in", 2018))   # frozenset
k = bool(18)                     # bool
l = bytes(8)                     # bytes
m = bytearray(8)                 # bytearray
n = memoryview(bytes(18))        # memoryview
Numbers are stored in numeric types. When a number is assigned to a variable, Python creates a number object.
# signed integer
age = 18
print(age)
# Output: 18
Python supports 3 types of numeric data.
int (signed integers like 20, 2, 225, etc.)
float (float is used to store floating-point numbers like 9.8, 3.1444, 89.52, etc.)
complex (complex numbers like 8.94j, 4.0 + 7.3j, etc.)
A complex number contains an ordered pair, i.e., a + ib, where a and b denote the real and imaginary parts, respectively.
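For instance (an illustrative snippet, not from the original article):
z = 4.0 + 7.3j
print(z.real)    # 4.0 -> the real part (a)
print(z.imag)    # 7.3 -> the imaginary part (b)
print(type(z))   # <class 'complex'>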
A string is a sequence of characters enclosed in quotation marks. In Python, strings can be defined using single, double, or triple quotes.
# String handling
'Hello Python'
# single (') quoted string
"Hello Python"
# double (") quoted string
"""Hello Python"""
'''Hello Python'''
# triple (''') (""") quoted string
In Python, string handling is a straightforward task, and Python provides various built-in functions and operators for working with strings.
The operator "+" is used to concatenate strings and "*" is used to repeat a string.
"Hello " + "python"
# output: 'Hello python'
"python " * 2
# output: 'python python '
#python web development #data types in python #list of all python data types #python data types #python datatypes #python types #python variable type
The following is a collection of tips I find to be useful when working with the Swift language. More content is available on my Twitter account!
Property Wrappers allow developers to wrap properties with specific behaviors, that will be seamlessly triggered whenever the properties are accessed.
While their primary use case is to implement business logic within our apps, it's also possible to use Property Wrappers as debugging tools!
For example, we could build a wrapper called @History
, that would be added to a property while debugging and would keep track of all the values set to this property.
import Foundation
@propertyWrapper
struct History<Value> {
private var value: Value
private(set) var history: [Value] = []
init(wrappedValue: Value) {
self.value = wrappedValue
}
var wrappedValue: Value {
get { value }
set {
history.append(value)
value = newValue
}
}
var projectedValue: Self {
return self
}
}
// We can then decorate our business code
// with the `@History` wrapper
struct User {
@History var name: String = ""
}
var user = User()
// All the existing call sites will still
// compile, without the need for any change
user.name = "John"
user.name = "Jane"
// But now we can also access an history of
// all the previous values!
user.$name.history // ["", "John"]
Custom String interpolation
Swift 5 gave us the possibility to define our own custom String interpolation methods.
This feature can be used to power many use cases, but there is one that is guaranteed to make sense in most projects: localizing user-facing strings.
import Foundation
extension String.StringInterpolation {
mutating func appendInterpolation(localized key: String, _ args: CVarArg...) {
let localized = String(format: NSLocalizedString(key, comment: ""), arguments: args)
appendLiteral(localized)
}
}
/*
Let's assume that this is the content of our Localizable.strings:
"welcome.screen.greetings" = "Hello %@!";
*/
let userName = "John"
print("\(localized: "welcome.screen.greetings", userName)") // Hello John!
Pseudo-inheritance between structs
If you've always wanted to use some kind of inheritance mechanism for your structs, Swift 5.1 is going to make you very happy!
Using the new KeyPath-based dynamic member lookup, you can implement some pseudo-inheritance, where a type inherits the API of another one 🎉
(However, be careful, I’m definitely not advocating inheritance as a go-to solution 🙃)
import Foundation
protocol Inherits {
associatedtype SuperType
var `super`: SuperType { get }
}
extension Inherits {
subscript<T>(dynamicMember keyPath: KeyPath<SuperType, T>) -> T {
return self.`super`[keyPath: keyPath]
}
}
struct Person {
let name: String
}
@dynamicMemberLookup
struct User: Inherits {
let `super`: Person
let login: String
let password: String
}
let user = User(super: Person(name: "John Appleseed"), login: "Johnny", password: "1234")
user.name // "John Appleseed"
user.login // "Johnny"
Composing NSAttributedString through a Function Builder
Swift 5.1 introduced Function Builders: a great tool for building custom DSL syntaxes, like SwiftUI. However, one doesn't need to be building a full-fledged DSL in order to leverage them.
For example, it's possible to write a simple Function Builder, whose job will be to compose together individual instances of NSAttributedString
through a nicer syntax than the standard API.
import UIKit
@_functionBuilder
class NSAttributedStringBuilder {
static func buildBlock(_ components: NSAttributedString...) -> NSAttributedString {
let result = NSMutableAttributedString(string: "")
return components.reduce(into: result) { (result, current) in result.append(current) }
}
}
extension NSAttributedString {
class func composing(@NSAttributedStringBuilder _ parts: () -> NSAttributedString) -> NSAttributedString {
return parts()
}
}
let result = NSAttributedString.composing {
NSAttributedString(string: "Hello",
attributes: [.font: UIFont.systemFont(ofSize: 24),
.foregroundColor: UIColor.red])
NSAttributedString(string: " world!",
attributes: [.font: UIFont.systemFont(ofSize: 20),
.foregroundColor: UIColor.orange])
}
Using switch and if as expressions
Contrary to other languages, like Kotlin, Swift does not allow switch and if to be used as expressions, meaning that the following code is not valid Swift:
let constant = if condition {
someValue
} else {
someOtherValue
}
A common solution to this problem is to wrap the if
or switch
statement within a closure, that will then be immediately called. While this approach does manage to achieve the desired goal, it makes for a rather poor syntax.
To avoid the ugly trailing ()
and improve on the readability, you can define a resultOf
function, that will serve the exact same purpose, in a more elegant way.
import Foundation
func resultOf<T>(_ code: () -> T) -> T {
return code()
}
let randomInt = Int.random(in: 0...3)
let spelledOut: String = resultOf {
switch randomInt {
case 0:
return "Zero"
case 1:
return "One"
case 2:
return "Two"
case 3:
return "Three"
default:
return "Out of range"
}
}
print(spelledOut)
Avoiding double negatives in guard statements
A guard statement is a very convenient way for the developer to assert that a condition is met, in order for the execution of the program to keep going.
However, since the body of a guard
statement is meant to be executed when the condition evaluates to false
, the use of the negation (!
) operator within the condition of a guard
statement can make the code hard to read, as it becomes a double negative.
A nice trick to avoid such double negatives is to encapsulate the use of the !
operator within a new property or function, whose name does not include a negative.
import Foundation
extension Collection {
var hasElements: Bool {
return !isEmpty
}
}
let array = Bool.random() ? [1, 2, 3] : []
guard array.hasElements else { fatalError("array was empty") }
print(array)
Defining a custom init without losing the compiler-generated one
It's common knowledge for Swift developers that, when you define a struct, the compiler is going to automatically generate a memberwise init for you. That is, unless you also define an init of your own, because then the compiler won't generate any memberwise init.
Yet, there are many instances where we might enjoy the opportunity to get both. As it turns out, this goal is quite easy to achieve: you just need to define your own init
in an extension
rather than inside the type definition itself.
import Foundation
struct Point {
let x: Int
let y: Int
}
extension Point {
init() {
x = 0
y = 0
}
}
let usingDefaultInit = Point(x: 4, y: 3)
let usingCustomInit = Point()
Using an empty enum as a namespace
Swift does not really have out-of-the-box support for namespaces. One could argue that a Swift module can be seen as a namespace, but creating a dedicated Framework for this sole purpose can legitimately be regarded as overkill.
Some developers have taken the habit to use a struct
which only contains static
fields to implement a namespace. While this does the job, it requires us to remember to implement an empty private
init()
, because it wouldn't make sense for such a struct
to be instantiated.
It's actually possible to take this approach one step further, by replacing the struct
with an enum
. While it might seem weird to have an enum
with no case
, it's actually a very idiomatic way to declare a type that cannot be instantiated.
import Foundation
enum NumberFormatterProvider {
static var currencyFormatter: NumberFormatter {
let formatter = NumberFormatter()
formatter.numberStyle = .currency
formatter.roundingIncrement = 0.01
return formatter
}
static var decimalFormatter: NumberFormatter {
let formatter = NumberFormatter()
formatter.numberStyle = .decimal
formatter.decimalSeparator = ","
return formatter
}
}
NumberFormatterProvider() // ❌ impossible to instantiate by mistake
NumberFormatterProvider.currencyFormatter.string(from: 2.456) // $2.46
NumberFormatterProvider.decimalFormatter.string(from: 2.456) // 2,456
Using Never to represent impossible code paths
Never is quite a peculiar type in the Swift Standard Library: it is defined as an empty enum, enum Never { }.
While this might seem odd at first glance, it actually yields a very interesting property: it makes it a type that cannot be constructed (i.e. it possesses no instances).
This way, Never
can be used as a generic parameter to let the compiler know that a particular feature will not be used.
import Foundation
enum Result<Value, Error> {
case success(value: Value)
case failure(error: Error)
}
func willAlwaysSucceed(_ completion: @escaping ((Result<String, Never>) -> Void)) {
completion(.success(value: "Call was successful"))
}
willAlwaysSucceed( { result in
switch result {
case .success(let value):
print(value)
// the compiler knows that the `failure` case cannot happen
// so it doesn't require us to handle it.
}
})
Providing a default value for a Decodable enum
Swift's Codable framework does a great job at seamlessly decoding entities from a JSON stream. However, when we integrate web services, we are sometimes left to deal with JSONs that require behaviors that Codable does not provide out-of-the-box.
For instance, we might have a string-based or integer-based enum
, and be required to set it to a default value when the data found in the JSON does not match any of its cases.
We might be tempted to implement this via an extensive switch
statement over all the possible cases, but there is a much shorter alternative through the initializer init?(rawValue:)
:
import Foundation
enum State: String, Decodable {
case active
case inactive
case undefined
init(from decoder: Decoder) throws {
let container = try decoder.singleValueContainer()
let decodedString = try container.decode(String.self)
self = State(rawValue: decodedString) ?? .undefined
}
}
let data = """
["active", "inactive", "foo"]
""".data(using: .utf8)!
let decoded = try! JSONDecoder().decode([State].self, from: data)
print(decoded) // [State.active, State.inactive, State.undefined]
Dependency injection boils down to a simple idea: when an object requires a dependency, it shouldn't create it by itself, but should instead be given a function that creates that dependency for it.
Now the great thing with Swift is that, not only can a function take another function as a parameter, but that parameter can also be given a default value.
When you combine those two features, you can end up with a dependency injection pattern that is light on boilerplate, yet type safe.
import Foundation
protocol Service {
func call() -> String
}
class ProductionService: Service {
func call() -> String {
return "This is the production"
}
}
class MockService: Service {
func call() -> String {
return "This is a mock"
}
}
typealias Provider<T> = () -> T
class Controller {
let service: Service
init(serviceProvider: Provider<Service> = { return ProductionService() }) {
self.service = serviceProvider()
}
func work() {
print(service.call())
}
}
let productionController = Controller()
productionController.work() // prints "This is the production"
let mockedController = Controller(serviceProvider: { return MockService() })
mockedController.work() // prints "This is a mock"
Singletons are pretty bad. They make your architecture rigid and tightly coupled, which then results in your code being hard to test and refactor. Instead of using singletons, your code should rely on dependency injection, which is a much more architecturally sound approach.
But singletons are so easy to use, and dependency injection requires us to do extra work. So maybe, for simple situations, we could find an in-between solution?
One possible solution is to rely on one of Swift's best-known features: protocol-oriented programming. Using a protocol
, we declare and access our dependency. We then store it in a private singleton, and perform the injection through an extension of said protocol
.
This way, our code will indeed be decoupled from its dependency, while at the same time keeping the boilerplate to a minimum.
import Foundation
protocol Formatting {
var formatter: NumberFormatter { get }
}
private let sharedFormatter: NumberFormatter = {
let sharedFormatter = NumberFormatter()
sharedFormatter.numberStyle = .currency
return sharedFormatter
}()
extension Formatting {
var formatter: NumberFormatter { return sharedFormatter }
}
class ViewModel: Formatting {
var displayableAmount: String?
func updateDisplay(to amount: Double) {
displayableAmount = formatter.string(for: amount)
}
}
let viewModel = ViewModel()
viewModel.updateDisplay(to: 42000.45)
viewModel.displayableAmount // "$42,000.45"
Getting rid of repeated [weak self] and guard
Callbacks are a part of almost all iOS apps, and as frameworks such as RxSwift
keep gaining in popularity, they become ever more present in our codebase.
Seasoned Swift developers are aware of the potential memory leaks that @escaping
callbacks can produce, so they make real sure to always use [weak self]
, whenever they need to use self
inside such a context. And when they need to have self
be non-optional, they then add a guard
statement along.
Consequently, this syntax of a [weak self] followed by a guard rapidly tends to appear everywhere in the codebase. The good thing is that, through a little protocol-oriented trick, it's actually possible to get rid of this tedious syntax, without losing any of its benefits!
import Foundation
import PlaygroundSupport
PlaygroundPage.current.needsIndefiniteExecution = true
protocol Weakifiable: class { }
extension Weakifiable {
func weakify(_ code: @escaping (Self) -> Void) -> () -> Void {
return { [weak self] in
guard let self = self else { return }
code(self)
}
}
func weakify<T>(_ code: @escaping (T, Self) -> Void) -> (T) -> Void {
return { [weak self] arg in
guard let self = self else { return }
code(arg, self)
}
}
}
extension NSObject: Weakifiable { }
class Producer: NSObject {
deinit {
print("deinit Producer")
}
private var handler: (Int) -> Void = { _ in }
func register(handler: @escaping (Int) -> Void) {
self.handler = handler
DispatchQueue.main.asyncAfter(deadline: .now() + 1.0, execute: { self.handler(42) })
}
}
class Consumer: NSObject {
deinit {
print("deinit Consumer")
}
let producer = Producer()
func consume() {
producer.register(handler: weakify { result, strongSelf in
strongSelf.handle(result)
})
}
private func handle(_ result: Int) {
print("🎉 \(result)")
}
}
var consumer: Consumer? = Consumer()
consumer?.consume()
DispatchQueue.main.asyncAfter(deadline: .now() + 2.0, execute: { consumer = nil })
// This code prints:
// 🎉 42
// deinit Consumer
// deinit Producer
Asynchronous functions are a big part of iOS APIs, and most developers are familiar with the challenge they pose when one needs to sequentially call several asynchronous APIs.
This often results in callbacks being nested into one another, a predicament often referred to as callback hell.
Many third-party frameworks are able to tackle this issue, for instance RxSwift or PromiseKit. Yet, for simple instances of the problem, there is no need to use such big guns, as it can actually be solved with simple function composition.
import Foundation
typealias CompletionHandler<Result> = (Result?, Error?) -> Void
infix operator ~>: MultiplicationPrecedence
func ~> <T, U>(_ first: @escaping (CompletionHandler<T>) -> Void, _ second: @escaping (T, CompletionHandler<U>) -> Void) -> (CompletionHandler<U>) -> Void {
return { completion in
first({ firstResult, error in
guard let firstResult = firstResult else { completion(nil, error); return }
second(firstResult, { (secondResult, error) in
completion(secondResult, error)
})
})
}
}
func ~> <T, U>(_ first: @escaping (CompletionHandler<T>) -> Void, _ transform: @escaping (T) -> U) -> (CompletionHandler<U>) -> Void {
return { completion in
first({ result, error in
guard let result = result else { completion(nil, error); return }
completion(transform(result), nil)
})
}
}
func service1(_ completionHandler: CompletionHandler<Int>) {
completionHandler(42, nil)
}
func service2(arg: String, _ completionHandler: CompletionHandler<String>) {
completionHandler("🎉 \(arg)", nil)
}
let chainedServices = service1
~> { int in return String(int / 2) }
~> service2
chainedServices({ result, _ in
guard let result = result else { return }
print(result) // Prints: 🎉 21
})
Asynchronous functions are a great way to deal with future events without blocking a thread. Yet, there are times where we would like them to behave in exactly such a blocking way.
Think about writing unit tests and using mocked network calls. You will need to add complexity to your test in order to deal with asynchronous functions, whereas synchronous ones would be much easier to manage.
Thanks to Swift's proficiency in the functional paradigm, it is possible to write a function whose job is to take an asynchronous function and transform it into a synchronous one.
import Foundation
func makeSynchrone<A, B>(_ asyncFunction: @escaping (A, (B) -> Void) -> Void) -> (A) -> B {
return { arg in
let lock = NSRecursiveLock()
var result: B? = nil
asyncFunction(arg) {
result = $0
lock.unlock()
}
lock.lock()
return result!
}
}
func myAsyncFunction(arg: Int, completionHandler: (String) -> Void) {
completionHandler("🎉 \(arg)")
}
let syncFunction = makeSynchrone(myAsyncFunction)
print(syncFunction(42)) // prints 🎉 42
Closures are a great way to interact with generic APIs, for instance APIs that allow to manipulate data structures through the use of generic functions, such as filter()
or sorted()
.
The annoying part is that closures tend to clutter your code with many instances of {, } and $0, which can quickly undermine its readability.
A nice alternative for a cleaner syntax is to use a KeyPath
instead of a closure, along with an operator that will deal with transforming the provided KeyPath
in a closure.
import Foundation
prefix operator ^
prefix func ^ <Element, Attribute>(_ keyPath: KeyPath<Element, Attribute>) -> (Element) -> Attribute {
return { element in element[keyPath: keyPath] }
}
struct MyData {
let int: Int
let string: String
}
let data = [MyData(int: 2, string: "Foo"), MyData(int: 4, string: "Bar")]
data.map(^\.int) // [2, 4]
data.map(^\.string) // ["Foo", "Bar"]
Typed access to a userInfo Dictionary
Many iOS APIs still rely on a userInfo Dictionary to handle use-case specific data. This Dictionary usually stores untyped values, and is declared as follows: [String: Any] (or sometimes [AnyHashable: Any]).
Retrieving data from such a structure will involve some conditional casting (via the as?
operator), which is prone to both errors and repetitions. Yet, by introducing a custom subscript
, it's possible to encapsulate all the tedious logic, and end up with an easier and more robust API.
import Foundation
typealias TypedUserInfoKey<T> = (key: String, type: T.Type)
extension Dictionary where Key == String, Value == Any {
subscript<T>(_ typedKey: TypedUserInfoKey<T>) -> T? {
return self[typedKey.key] as? T
}
}
let userInfo: [String : Any] = ["Foo": 4, "Bar": "forty-two"]
let integerTypedKey = TypedUserInfoKey(key: "Foo", type: Int.self)
let intValue = userInfo[integerTypedKey] // returns 4
type(of: intValue) // returns Int?
let stringTypedKey = TypedUserInfoKey(key: "Bar", type: String.self)
let stringValue = userInfo[stringTypedKey] // returns "forty-two"
type(of: stringValue) // returns String?
MVVM is a great pattern to separate business logic from presentation logic. The main challenge in making it work is to define a mechanism for the presentation layer to be notified of model updates.
RxSwift is a perfect choice to solve such a problem. Yet, some developers don't feel comfortable with leveraging a third-party library for such a central part of their architecture.
For those situations, it's possible to define a lightweight Variable
type, that will make the MVVM pattern very easy to use!
import Foundation
class Variable<Value> {
var value: Value {
didSet {
onUpdate?(value)
}
}
var onUpdate: ((Value) -> Void)? {
didSet {
onUpdate?(value)
}
}
init(_ value: Value, _ onUpdate: ((Value) -> Void)? = nil) {
self.value = value
self.onUpdate = onUpdate
self.onUpdate?(value)
}
}
let variable: Variable<String?> = Variable(nil)
variable.onUpdate = { data in
if let data = data {
print(data)
}
}
variable.value = "Foo"
variable.value = "Bar"
// prints:
// Foo
// Bar
Using typealias to its fullest
The keyword typealias allows developers to give a new name to an already existing type. For instance, Swift defines Void as a typealias of (), the empty tuple.
But a lesser-known feature of this mechanism is that it allows us to assign concrete types to generic parameters, or to rename them. This can help make the semantics of generic types much clearer when they are used in specific use cases.
import Foundation
enum Either<Left, Right> {
case left(Left)
case right(Right)
}
typealias Result<Value> = Either<Value, Error>
typealias IntOrString = Either<Int, String>
A forEach that can be stopped
Iterating through objects via the forEach(_:) method is a great alternative to the classic for loop, as it allows our code to be completely oblivious of the iteration logic. One limitation, however, is that forEach(_:) does not allow stopping the iteration midway.
Taking inspiration from the Objective-C implementation, we can write an overload that will allow the developer to stop the iteration, if needed.
import Foundation
extension Sequence {
func forEach(_ body: (Element, _ stop: inout Bool) throws -> Void) rethrows {
var stop = false
for element in self {
try body(element, &stop)
if stop {
return
}
}
}
}
["Foo", "Bar", "FooBar"].forEach { element, stop in
print(element)
stop = (element == "Bar")
}
// Prints:
// Foo
// Bar
A more performant reduce()
Functional programming is a great way to simplify a codebase. For instance, reduce is an alternative to the classic for loop, without most of the boilerplate. Unfortunately, simplicity often comes at the price of performance.
Consider that you want to remove duplicate values from a Sequence. While reduce() is a perfectly fine way to express this computation, the performance will be suboptimal, because of all the unnecessary Array
copying that will happen every time its closure gets called.
That's when reduce(into:_:)
comes into play. This version of reduce leverages the capabilities of copy-on-write types (such as Array or Dictionary) in order to avoid unnecessary copying, which results in a great performance boost.
import Foundation
func time(averagedExecutions: Int = 1, _ code: () -> Void) {
let start = Date()
for _ in 0..<averagedExecutions { code() }
let end = Date()
let duration = end.timeIntervalSince(start) / Double(averagedExecutions)
print("time: \(duration)")
}
let data = (1...1_000).map { _ in Int(arc4random_uniform(256)) }
// runs in 0.63s
time {
let noDuplicates: [Int] = data.reduce([], { $0.contains($1) ? $0 : $0 + [$1] })
}
// runs in 0.15s
time {
let noDuplicates: [Int] = data.reduce(into: [], { if !$0.contains($1) { $0.append($1) } } )
}
UI components such as UITableView
and UICollectionView
rely on reuse identifiers in order to efficiently recycle the views they display. Often, those reuse identifiers take the form of a static hardcoded String
, that will be used for every instance of their class.
Through protocol-oriented programming, it's possible to avoid those hardcoded values, and instead use the name of the type as a reuse identifier.
import Foundation
import UIKit
protocol Reusable {
static var reuseIdentifier: String { get }
}
extension Reusable {
static var reuseIdentifier: String {
return String(describing: self)
}
}
extension UITableViewCell: Reusable { }
extension UITableView {
func register<T: UITableViewCell>(_ class: T.Type) {
register(`class`, forCellReuseIdentifier: T.reuseIdentifier)
}
func dequeueReusableCell<T: UITableViewCell>(for indexPath: IndexPath) -> T {
return dequeueReusableCell(withIdentifier: T.reuseIdentifier, for: indexPath) as! T
}
}
class MyCell: UITableViewCell { }
let tableView = UITableView()
tableView.register(MyCell.self)
let myCell: MyCell = tableView.dequeueReusableCell(for: [0, 0])
The C language has a construct called union
, that allows a single variable to hold values from different types. While Swift does not provide such a construct, it provides enums with associated values, which allows us to define a type called Either
that implements a union
of two types.
import Foundation
enum Either<A, B> {
case left(A)
case right(B)
func either(ifLeft: ((A) -> Void)? = nil, ifRight: ((B) -> Void)? = nil) {
switch self {
case let .left(a):
ifLeft?(a)
case let .right(b):
ifRight?(b)
}
}
}
extension Bool { static func random() -> Bool { return arc4random_uniform(2) == 0 } }
var intOrString: Either<Int, String> = Bool.random() ? .left(2) : .right("Foo")
intOrString.either(ifLeft: { print($0 + 1) }, ifRight: { print($0 + "Bar") })
If you're interested by this kind of data structure, I strongly recommend that you learn more about Algebraic Data Types.
Most of the time, when we create a .xib
file, we give it the same name as its associated class. Consequently, if we later refactor our code and rename such a class, we run the risk of forgetting to rename the associated .xib.
While the error will often be easy to catch, if the .xib is used in a remote section of the app, it might go unnoticed for some time. Fortunately it's possible to build custom test predicates that will assert that 1) for a given class, there exists a .nib
with the same name in a given Bundle
, 2) for all the .nib
in a given Bundle
, there exists a class with the same name.
import XCTest
public func XCTAssertClassHasNib(_ class: AnyClass, bundle: Bundle, file: StaticString = #file, line: UInt = #line) {
let associatedNibURL = bundle.url(forResource: String(describing: `class`), withExtension: "nib")
XCTAssertNotNil(associatedNibURL, "Class \"\(`class`)\" has no associated nib file", file: file, line: line)
}
public func XCTAssertNibHaveClasses(_ bundle: Bundle, file: StaticString = #file, line: UInt = #line) {
guard let bundleName = bundle.infoDictionary?["CFBundleName"] as? String,
let basePath = bundle.resourcePath,
let enumerator = FileManager.default.enumerator(at: URL(fileURLWithPath: basePath),
includingPropertiesForKeys: nil,
options: [.skipsHiddenFiles, .skipsSubdirectoryDescendants]) else { return }
var nibFilesURLs = [URL]()
for case let fileURL as URL in enumerator {
if fileURL.pathExtension.uppercased() == "NIB" {
nibFilesURLs.append(fileURL)
}
}
nibFilesURLs.map { $0.lastPathComponent }
.compactMap { $0.split(separator: ".").first }
.map { String($0) }
.forEach {
let associatedClass: AnyClass? = bundle.classNamed("\(bundleName).\($0)")
XCTAssertNotNil(associatedClass, "File \"\($0).nib\" has no associated class", file: file, line: line)
}
}
XCTAssertClassHasNib(MyFirstTableViewCell.self, bundle: Bundle(for: AppDelegate.self))
XCTAssertClassHasNib(MySecondTableViewCell.self, bundle: Bundle(for: AppDelegate.self))
XCTAssertNibHaveClasses(Bundle(for: AppDelegate.self))
Many thanks to Benjamin Lavialle for coming up with the idea behind the second test predicate.
Seasoned Swift developers know it: a protocol with associated type (PAT) "can only be used as a generic constraint because it has Self or associated type requirements". When we really need to use a PAT to type a variable, the go-to workaround is to use a type-erased wrapper.
While this solution works perfectly, it requires a fair amount of boilerplate code. In instances where we are only interested in exposing one particular function of the PAT, a shorter approach using function types is possible.
import Foundation
import UIKit
protocol Configurable {
associatedtype Model
func configure(with model: Model)
}
typealias Configurator<Model> = (Model) -> ()
extension UILabel: Configurable {
func configure(with model: String) {
self.text = model
}
}
let label = UILabel()
let configurator: Configurator<String> = label.configure
configurator("Foo")
label.text // "Foo"
UIKit
exposes a very powerful and simple API to perform view animations. However, this API can become a little bit quirky to use when we want to perform animations sequentially, because it involves nesting closures within one another, which produces notoriously hard-to-maintain code.
Nonetheless, it's possible to define a rather simple class that will expose a much nicer API for this particular use case 👌
import Foundation
import UIKit
class AnimationSequence {
typealias Animations = () -> Void
private let current: Animations
private let duration: TimeInterval
private var next: AnimationSequence? = nil
init(animations: @escaping Animations, duration: TimeInterval) {
self.current = animations
self.duration = duration
}
@discardableResult func append(animations: @escaping Animations, duration: TimeInterval) -> AnimationSequence {
var lastAnimation = self
while let nextAnimation = lastAnimation.next {
lastAnimation = nextAnimation
}
lastAnimation.next = AnimationSequence(animations: animations, duration: duration)
return self
}
func run() {
UIView.animate(withDuration: duration, animations: current, completion: { finished in
if finished, let next = self.next {
next.run()
}
})
}
}
var firstView = UIView()
var secondView = UIView()
firstView.alpha = 0
secondView.alpha = 0
AnimationSequence(animations: { firstView.alpha = 1.0 }, duration: 1)
.append(animations: { secondView.alpha = 1.0 }, duration: 0.5)
.append(animations: { firstView.alpha = 0.0 }, duration: 2.0)
.run()
Debouncing is a very useful tool when dealing with UI inputs. Consider a search bar, whose content is used to query an API. It wouldn't make sense to perform a request for every character the user is typing, because as soon as a new character is entered, the result of the previous request has become irrelevant.
Instead, our code will perform much better if we "debounce" the API call, meaning that we will wait until some delay has passed, without the input being modified, before actually performing the call.
import Foundation
func debounced(delay: TimeInterval, queue: DispatchQueue = .main, action: @escaping (() -> Void)) -> () -> Void {
var workItem: DispatchWorkItem?
return {
workItem?.cancel()
workItem = DispatchWorkItem(block: action)
queue.asyncAfter(deadline: .now() + delay, execute: workItem!)
}
}
let debouncedPrint = debounced(delay: 1.0) { print("Action performed!") }
debouncedPrint()
debouncedPrint()
debouncedPrint()
// After a 1 second delay, this gets
// printed only once to the console:
// Action performed!
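To connect this back to the search-bar scenario described above, here is a minimal sketch (SearchViewController, latestQuery and performSearch(_:) are hypothetical names) of how the debounced helper defined above could be wired to a UISearchBar:
import UIKit
class SearchViewController: UIViewController, UISearchBarDelegate {
    private var latestQuery = ""
    // The actual API call only fires once the user has stopped typing for 0.5 seconds.
    private lazy var debouncedSearch: () -> Void = debounced(delay: 0.5) { [weak self] in
        guard let self = self else { return }
        self.performSearch(self.latestQuery)
    }
    func searchBar(_ searchBar: UISearchBar, textDidChange searchText: String) {
        latestQuery = searchText
        debouncedSearch()
    }
    private func performSearch(_ query: String) {
        // query the search API with `query`
    }
}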
When we need to apply the standard boolean operators to Optional booleans, we often end up with a syntax unnecessarily crowded with unwrapping operations. By taking a cue from the world of three-valued logics, we can define a couple of operators that make working with Bool? values much nicer.
import Foundation
func && (lhs: Bool?, rhs: Bool?) -> Bool? {
switch (lhs, rhs) {
case (false, _), (_, false):
return false
case let (unwrapLhs?, unwrapRhs?):
return unwrapLhs && unwrapRhs
default:
return nil
}
}
func || (lhs: Bool?, rhs: Bool?) -> Bool? {
switch (lhs, rhs) {
case (true, _), (_, true):
return true
case let (unwrapLhs?, unwrapRhs?):
return unwrapLhs || unwrapRhs
default:
return nil
}
}
false && nil // false
true && nil // nil
[true, nil, false].reduce(true, &&) // false
nil || true // true
nil || false // nil
[true, nil, false].reduce(false, ||) // true
Transforming a Sequence in order to remove all the duplicate values it contains is a classic use case. To implement it, one could be tempted to transform the Sequence into a Set, then back to an Array. The downside with this approach is that it will not preserve the order of the sequence, which can definitely be a dealbreaker. Using reduce(), it is possible to provide a concise implementation that preserves ordering:
import Foundation
extension Sequence where Element: Equatable {
func duplicatesRemoved() -> [Element] {
return reduce([], { $0.contains($1) ? $0 : $0 + [$1] })
}
}
let data = [2, 5, 2, 3, 6, 5, 2]
data.duplicatesRemoved() // [2, 5, 3, 6]
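Note that the reduce-based version above is quadratic, since contains scans the accumulated array at every step. If the elements happen to be Hashable, a possible alternative (a sketch, not part of the original tip) tracks the values already seen in a Set, staying linear while still preserving order:
extension Sequence where Element: Hashable {
    func duplicatesRemovedKeepingOrder() -> [Element] {
        var seen = Set<Element>()
        // insert(_:) reports whether the element was actually added,
        // so only the first occurrence of each value passes the filter
        return filter { seen.insert($0).inserted }
    }
}
let hashableData = [2, 5, 2, 3, 6, 5, 2]
hashableData.duplicatesRemovedKeepingOrder() // [2, 5, 3, 6]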
Optional strings are very common in Swift code; for instance, many objects from UIKit expose the text they display as a String?. Many times you will need to manipulate this data as an unwrapped String, with a default value set to the empty string for nil cases.
While the nil-coalescing operator (i.e. ??) is a perfectly fine way to achieve this goal, defining a computed variable like orEmpty can help a lot in cleaning up the syntax.
import Foundation
import UIKit
extension Optional where Wrapped == String {
var orEmpty: String {
switch self {
case .some(let value):
return value
case .none:
return ""
}
}
}
func doesNotWorkWithOptionalString(_ param: String) {
// do something with `param`
}
let label = UILabel()
label.text = "This is some text."
doesNotWorkWithOptionalString(label.text.orEmpty)
Every seasoned iOS developer knows it: objects from UIKit can only be accessed from the main thread. Any attempt to access them from a background thread is a guaranteed crash.
Still, running a costly computation in the background and then using the result to update the UI is a common pattern.
In such cases you can rely on asyncUI to encapsulate all the boilerplate code.
import Foundation
import UIKit
func asyncUI<T>(_ computation: @autoclosure @escaping () -> T, qos: DispatchQoS.QoSClass = .userInitiated, _ completion: @escaping (T) -> Void) {
DispatchQueue.global(qos: qos).async {
let value = computation()
DispatchQueue.main.async {
completion(value)
}
}
}
let label = UILabel()
func costlyComputation() -> Int { return (0..<10_000).reduce(0, +) }
asyncUI(costlyComputation()) { value in
label.text = "\(value)"
}
A debug view, from which any controller of an app can be instantiated and pushed on the navigation stack, has the potential to bring some real value to a development process. A requirement to build such a view is to have a list of all the classes from a given Bundle that inherit from UIViewController. With the following extension, retrieving this list becomes a piece of cake 🍰
import Foundation
import UIKit
import ObjectiveC
extension Bundle {
func viewControllerTypes() -> [UIViewController.Type] {
guard let bundlePath = self.executablePath else { return [] }
var size: UInt32 = 0
var rawClassNames: UnsafeMutablePointer<UnsafePointer<Int8>>!
var parsedClassNames = [String]()
rawClassNames = objc_copyClassNamesForImage(bundlePath, &size)
for index in 0..<size {
let className = rawClassNames[Int(index)]
if let name = NSString.init(utf8String:className) as String?,
NSClassFromString(name) is UIViewController.Type {
parsedClassNames.append(name)
}
}
return parsedClassNames
.sorted()
.compactMap { NSClassFromString($0) as? UIViewController.Type }
}
}
// Fetch all view controller types in UIKit
Bundle(for: UIViewController.self).viewControllerTypes()
I share the credit for this tip with Benoît Caron.
Update: As it turns out, map is actually a really bad name for this function, because it does not preserve composition of transformations, a property that is required to fit the definition of a real map function.
Surprisingly enough, the standard library doesn't define a map() function for dictionaries that allows mapping both keys and values into a new Dictionary. Nevertheless, such a function can be helpful, for instance when converting data across different frameworks.
import Foundation
extension Dictionary {
func map<T: Hashable, U>(_ transform: (Key, Value) throws -> (T, U)) rethrows -> [T: U] {
var result: [T: U] = [:]
for (key, value) in self {
let (transformedKey, transformedValue) = try transform(key, value)
result[transformedKey] = transformedValue
}
return result
}
}
let data = [0: 5, 1: 6, 2: 7]
data.map { ("\($0)", $1 * $1) } // ["2": 49, "0": 25, "1": 36]
Swift provides the function compactMap(), which can be used to remove nil values from a Sequence of optionals by calling it with an argument that just returns its parameter (i.e. compactMap { $0 }). Still, for such use cases it would be nice to get rid of the trailing closure.
The implementation isn't as straightforward as your usual extension, but once it has been written, the call site definitely gets cleaner 👌
import Foundation
protocol OptionalConvertible {
associatedtype Wrapped
func asOptional() -> Wrapped?
}
extension Optional: OptionalConvertible {
func asOptional() -> Wrapped? {
return self
}
}
extension Sequence where Element: OptionalConvertible {
func compacted() -> [Element.Wrapped] {
return compactMap { $0.asOptional() }
}
}
let data = [nil, 1, 2, nil, 3, 5, nil, 8, nil]
data.compacted() // [1, 2, 3, 5, 8]
It might happen that your code has to deal with values that come with an expiration date. In a game, it could be a score multiplier that will only last for 30 seconds. Or it could be an authentication token for an API, with a 15-minute lifespan. In both instances you can rely on the type Expirable to encapsulate the expiration logic.
import Foundation
struct Expirable<T> {
private var innerValue: T
private(set) var expirationDate: Date
var value: T? {
return hasExpired() ? nil : innerValue
}
init(value: T, expirationDate: Date) {
self.innerValue = value
self.expirationDate = expirationDate
}
init(value: T, duration: Double) {
self.innerValue = value
self.expirationDate = Date().addingTimeInterval(duration)
}
func hasExpired() -> Bool {
return expirationDate < Date()
}
}
let expirable = Expirable(value: 42, duration: 3)
sleep(2)
expirable.value // 42
sleep(2)
expirable.value // nil
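As a concrete illustration of the API-token scenario mentioned above, here is a small sketch (AuthToken and authorizationHeader are hypothetical names introduced for the example):
struct AuthToken { let raw: String }
let token = Expirable(value: AuthToken(raw: "abc123"), duration: 15 * 60)
func authorizationHeader() -> String? {
    // returns nil once the 15-minute lifespan has elapsed, signaling that a refresh is needed
    return token.value.map { "Bearer \($0.raw)" }
}
authorizationHeader() // "Bearer abc123" while the token is still valid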
I share the credit for this tip with Benoît Caron.
Almost all Apple devices able to run Swift code are powered by a multi-core CPU, so making good use of parallelism is a great way to improve code performance. map() is a perfect candidate for such an optimization, because it is almost trivial to define a parallel implementation.
import Foundation
extension Array {
    func parallelMap<T>(_ transform: (Element) -> T) -> [T] {
        let res = UnsafeMutablePointer<T>.allocate(capacity: count)
        DispatchQueue.concurrentPerform(iterations: count) { i in
            // initialize each slot of the raw buffer with the transformed element
            res.advanced(by: i).initialize(to: transform(self[i]))
        }
        let finalResult = Array<T>(UnsafeBufferPointer(start: res, count: count))
        res.deinitialize(count: count)
        res.deallocate()
        return finalResult
    }
}
let array = (0..<1_000).map { $0 }
func work(_ n: Int) -> Int {
return (0..<n).reduce(0, +)
}
array.parallelMap { work($0) }
🚨 Make sure to only use parallelMap() when the transform function actually performs some costly computations. Otherwise, performance will be systematically slower than with map(), because of the multithreading overhead.
During development of a feature that performs some heavy computations, it can be helpful to measure just how much time a chunk of code takes to run. The time() function is a nice tool for this purpose, because of how simple it is to add and then to remove when it is no longer needed.
import Foundation
func time(averagedExecutions: Int = 1, _ code: () -> Void) {
let start = Date()
for _ in 0..<averagedExecutions { code() }
let end = Date()
let duration = end.timeIntervalSince(start) / Double(averagedExecutions)
print("time: \(duration)")
}
time {
(0...10_000).map { $0 * $0 }
}
// time: 0.183973908424377
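The averagedExecutions parameter, left at its default above, can help smooth out noisy measurements by running the block several times and printing the mean duration:
time(averagedExecutions: 10) {
    (0...10_000).map { $0 * $0 }
}
// time: the mean duration over the 10 runs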
Concurrency is definitely one of those topics where the right encapsulation bears the potential to make your life so much easier. For instance, with this piece of code you can easily launch two computations in parallel, and have the results returned in a tuple.
import Foundation
func parallel<T, U>(_ left: @autoclosure () -> T, _ right: @autoclosure () -> U) -> (T, U) {
var leftRes: T?
var rightRes: U?
DispatchQueue.concurrentPerform(iterations: 2, execute: { id in
if id == 0 {
leftRes = left()
} else {
rightRes = right()
}
})
return (leftRes!, rightRes!)
}
let values = (1...100_000).map { $0 }
let results = parallel(values.map { $0 * $0 }, values.reduce(0, +))
Swift exposes three special variables, #file, #line and #function, which are respectively set to the name of the current file, the current line number, and the name of the current function. These variables become very useful when writing custom logging functions or test predicates.
import Foundation
func log(_ message: String, _ file: String = #file, _ line: Int = #line, _ function: String = #function) {
print("[\(file):\(line)] \(function) - \(message)")
}
func foo() {
log("Hello world!")
}
foo() // [MyPlayground.playground:8] foo() - Hello world!
Swift 4.1 has introduced a new feature called Conditional Conformance, which allows a type to implement a protocol only when its generic type also does.
With this addition it becomes easy to let Optional implement Comparable only when Wrapped also implements Comparable:
import Foundation
extension Optional: Comparable where Wrapped: Comparable {
    public static func < (lhs: Optional, rhs: Optional) -> Bool {
        switch (lhs, rhs) {
        case let (lhs?, rhs?):
            return lhs < rhs
        case (nil, _?):
            return true // anything is greater than nil
        case (_?, nil):
            return false // nil is smaller than anything
        case (nil, nil):
            return false // nil is not smaller than itself
        }
    }
}
let data: [Int?] = [8, 4, 3, nil, 12, 4, 2, nil, -5]
data.sorted() // [nil, nil, Optional(-5), Optional(2), Optional(3), Optional(4), Optional(4), Optional(8), Optional(12)]
Any attempt to access an Array beyond its bounds will result in a crash. While it's possible to write conditions such as if index < array.count { array[index] } in order to prevent such crashes, this approach will rapidly become cumbersome.
A great thing is that this condition can be encapsulated in a custom subscript that will work on any Collection:
import Foundation
extension Collection {
subscript (safe index: Index) -> Element? {
return indices.contains(index) ? self[index] : nil
}
}
let data = [1, 3, 4]
data[safe: 1] // Optional(3)
data[safe: 10] // nil
Subscripting a string with a range can be very cumbersome in Swift 4. Let's face it, no one wants to write lines like someString[index(startIndex, offsetBy: 0)..<index(startIndex, offsetBy: 10)] on a regular basis.
Luckily, with the addition of one clever extension, strings can be sliced as easily as arrays 🎉
import Foundation
extension String {
public subscript(value: CountableClosedRange<Int>) -> Substring {
get {
return self[index(startIndex, offsetBy: value.lowerBound)...index(startIndex, offsetBy: value.upperBound)]
}
}
public subscript(value: CountableRange<Int>) -> Substring {
get {
return self[index(startIndex, offsetBy: value.lowerBound)..<index(startIndex, offsetBy: value.upperBound)]
}
}
public subscript(value: PartialRangeUpTo<Int>) -> Substring {
get {
return self[..<index(startIndex, offsetBy: value.upperBound)]
}
}
public subscript(value: PartialRangeThrough<Int>) -> Substring {
get {
return self[...index(startIndex, offsetBy: value.upperBound)]
}
}
public subscript(value: PartialRangeFrom<Int>) -> Substring {
get {
return self[index(startIndex, offsetBy: value.lowerBound)...]
}
}
}
let data = "This is a string!"
data[..<4] // "This"
data[5..<9] // "is a"
data[10...] // "string!"
By using a KeyPath along with a generic type, a very clean and concise syntax for sorting data can be implemented:
import Foundation
extension Sequence {
func sorted<T: Comparable>(by attribute: KeyPath<Element, T>) -> [Element] {
return sorted(by: { $0[keyPath: attribute] < $1[keyPath: attribute] })
}
}
let data = ["Some", "words", "of", "different", "lengths"]
data.sorted(by: \.count) // ["of", "Some", "words", "lengths", "different"]
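A possible refinement (a sketch, not part of the original tip) is to also accept the comparison function, which makes a descending sort just as concise:
extension Sequence {
    // same idea as above, but the caller chooses the comparison to apply
    func sorted<T: Comparable>(by attribute: KeyPath<Element, T>, using comparator: (T, T) -> Bool) -> [Element] {
        return sorted(by: { comparator($0[keyPath: attribute], $1[keyPath: attribute]) })
    }
}
data.sorted(by: \.count, using: >) // ["different", "lengths", "words", "Some", "of"]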
If you like this syntax, make sure to check out KeyPathKit!
By capturing a local variable in a returned closure, it is possible to manufacture cache-efficient versions of pure functions. Be careful though, this trick only works with non-recursive functions!
import Foundation
func cached<In: Hashable, Out>(_ f: @escaping (In) -> Out) -> (In) -> Out {
var cache = [In: Out]()
return { (input: In) -> Out in
if let cachedValue = cache[input] {
return cachedValue
} else {
let result = f(input)
cache[input] = result
return result
}
}
}
let cachedCos = cached { (x: Double) in cos(x) }
cachedCos(.pi * 2) // value of cos for 2π is now cached
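To make the caveat about recursion concrete, here is a small sketch with a hypothetical fibonacci function: the recursive calls inside the function go straight to the uncached implementation, so only repeated top-level calls benefit from the cache.
func fibonacci(_ n: Int) -> Int {
    return n < 2 ? n : fibonacci(n - 1) + fibonacci(n - 2)
}
let cachedFibonacci = cached(fibonacci)
cachedFibonacci(20) // computed from scratch: the inner recursive calls bypass the cache
cachedFibonacci(20) // only this second, identical top-level call is served from the cache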
When distinguishing between complex boolean conditions, using a switch statement along with pattern matching can be more readable than the classic series of if {} else if {}.
import Foundation
// placeholder values and stub functions, so that the comparison below compiles
let expr1 = true
let expr2 = false
let expr3 = true
func functionA() {}
func functionB() {}
func functionC() {}
if expr1 && !expr3 {
    functionA()
} else if !expr2 && expr3 {
    functionB()
} else if expr1 && !expr2 && expr3 {
    functionC()
}
switch (expr1, expr2, expr3) {
case (true, _, false):
    functionA()
case (_, false, true):
    functionB()
case (true, false, true):
    functionC()
default:
    break
}
Using map() on a range makes it easy to generate an array of data.
import Foundation
func randomInt() -> Int { return Int(arc4random()) }
let randomArray = (1...10).map { _ in randomInt() }
Using @autoclosure enables the compiler to automatically wrap an argument within a closure, thus allowing for a very clean syntax at call sites.
import UIKit
extension UIView {
class func animate(withDuration duration: TimeInterval, _ animations: @escaping @autoclosure () -> Void) {
UIView.animate(withDuration: duration, animations: animations)
}
}
let view = UIView()
UIView.animate(withDuration: 0.3, view.backgroundColor = .orange)
When working with RxSwift, it's very easy to observe both the current and previous value of an observable sequence by simply introducing a shift using skip().
import RxSwift
let values = Observable.of(4, 8, 15, 16, 23, 42)
let newAndOld = Observable.zip(values, values.skip(1)) { (previous: $0, current: $1) }
.subscribe(onNext: { pair in
print("current: \(pair.current) - previous: \(pair.previous)")
})
//current: 8 - previous: 4
//current: 15 - previous: 8
//current: 16 - previous: 15
//current: 23 - previous: 16
//current: 42 - previous: 23
Using protocols such as ExpressibleByStringLiteral, it is possible to provide an init that will be automatically called when a literal value is provided, allowing for a nice and short syntax. This can be very helpful when writing mock or test data.
import Foundation
extension URL: ExpressibleByStringLiteral {
public init(stringLiteral value: String) {
self.init(string: value)!
}
}
let url: URL = "http://www.google.fr"
NSURLConnection.canHandle(URLRequest(url: "http://www.google.fr"))
Through some clever use of Swift private visibility, it is possible to define a container that holds any untrusted value (such as a user input) from which the only way to retrieve the value is by making it successfully pass a validation test.
import Foundation
struct Untrusted<T> {
private(set) var value: T
}
protocol Validator {
associatedtype T
static func validation(value: T) -> Bool
}
extension Validator {
static func validate(untrusted: Untrusted<T>) -> T? {
if self.validation(value: untrusted.value) {
return untrusted.value
} else {
return nil
}
}
}
struct FrenchPhoneNumberValidator: Validator {
static func validation(value: String) -> Bool {
return (value.count) == 10 && CharacterSet(charactersIn: value).isSubset(of: CharacterSet.decimalDigits)
}
}
let validInput = Untrusted(value: "0122334455")
let invalidInput = Untrusted(value: "0123")
FrenchPhoneNumberValidator.validate(untrusted: validInput) // returns "0122334455"
FrenchPhoneNumberValidator.validate(untrusted: invalidInput) // returns nil
With the addition of keypaths in Swift 4, it is now possible to easily implement the builder pattern, which allows the developer to clearly separate the code that initializes a value from the code that uses it, without the burden of defining a factory method.
import UIKit
protocol With {}
extension With where Self: AnyObject {
@discardableResult
func with<T>(_ property: ReferenceWritableKeyPath<Self, T>, setTo value: T) -> Self {
self[keyPath: property] = value
return self
}
}
extension UIView: With {}
let view = UIView()
let label = UILabel()
.with(\.textColor, setTo: .red)
.with(\.text, setTo: "Foo")
.with(\.textAlignment, setTo: .right)
.with(\.layer.cornerRadius, setTo: 5)
view.addSubview(label)
🚨 The Swift compiler does not perform OS availability checks on properties referenced by keypaths. Any attempt to use a KeyPath for an unavailable property will result in a runtime crash.
I share the credit for this tip with Marion Curtil.
When a type stores values for the sole purpose of parametrizing its functions, it's possible to store not the values themselves but the function directly, with no discernible difference at the call site.
import Foundation
struct MaxValidator {
let max: Int
let strictComparison: Bool
func isValid(_ value: Int) -> Bool {
return self.strictComparison ? value < self.max : value <= self.max
}
}
struct MaxValidator2 {
var isValid: (_ value: Int) -> Bool
init(max: Int, strictComparison: Bool) {
self.isValid = strictComparison ? { $0 < max } : { $0 <= max }
}
}
MaxValidator(max: 5, strictComparison: true).isValid(5) // false
MaxValidator2(max: 5, strictComparison: false).isValid(5) // true
Functions are first-class citizen types in Swift, so it is perfectly legal to define operators for them.
import Foundation
let firstRange = { (0...3).contains($0) }
let secondRange = { (5...6).contains($0) }
func ||(_ lhs: @escaping (Int) -> Bool, _ rhs: @escaping (Int) -> Bool) -> (Int) -> Bool {
return { value in
return lhs(value) || rhs(value)
}
}
(firstRange || secondRange)(2) // true
(firstRange || secondRange)(4) // false
(firstRange || secondRange)(6) // true
Typealiases are great for expressing function signatures in a more comprehensible manner, which then enables us to easily define functions that operate on them, resulting in a nice way to write and use some powerful APIs.
import Foundation
typealias RangeSet = (Int) -> Bool
func union(_ left: @escaping RangeSet, _ right: @escaping RangeSet) -> RangeSet {
return { left($0) || right($0) }
}
let firstRange = { (0...3).contains($0) }
let secondRange = { (5...6).contains($0) }
let unionRange = union(firstRange, secondRange)
unionRange(2) // true
unionRange(4) // false
By returning a closure that captures a local variable, it's possible to encapsulate a mutable state within a function.
import Foundation
func counterFactory() -> () -> Int {
var counter = 0
return {
counter += 1
return counter
}
}
let counter = counterFactory()
counter() // returns 1
counter() // returns 2
⚠️ Since Swift 4.2, allCases can be synthesized at compile-time by simply conforming to the protocol CaseIterable. The implementation below should no longer be used in production code.
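For reference, here is a minimal sketch of the compiler-supported approach (the enum name is illustrative):
enum MyModernEnum: CaseIterable { case first, second, third, fourth }
// allCases is synthesized by the compiler, no custom code required
MyModernEnum.allCases // [.first, .second, .third, .fourth]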
Through some clever leveraging of how enums are stored in memory, it is possible to generate an array that contains all the possible cases of an enum. This can prove particularly useful when writing unit tests that consume random data.
import Foundation
enum MyEnum { case first; case second; case third; case fourth }
protocol EnumCollection: Hashable {
static var allCases: [Self] { get }
}
extension EnumCollection {
public static var allCases: [Self] {
var i = 0
return Array(AnyIterator {
let next = withUnsafePointer(to: &i) {
$0.withMemoryRebound(to: Self.self, capacity: 1) { $0.pointee }
}
if next.hashValue != i { return nil }
i += 1
return next
})
}
}
extension MyEnum: EnumCollection { }
MyEnum.allCases // [.first, .second, .third, .fourth]
The if-let syntax is a great way to deal with optional values in a safe manner, but at times it can prove to be just a little bit too cumbersome. In such cases, using the Optional.map() function is a nice way to achieve shorter code while retaining safety and readability.
import UIKit
let date: Date? = Date() // or could be nil, doesn't matter
let formatter = DateFormatter()
let label = UILabel()
if let safeDate = date {
label.text = formatter.string(from: safeDate)
}
label.text = date.map { return formatter.string(from: $0) }
label.text = date.map(formatter.string(from:)) // even shorter, though less readable
Author: vincent-pradeilles
Source code: https://github.com/vincent-pradeilles/swift-tips
License: MIT license
#swift
1620466520
If you accumulate data on which you base your decision-making as an organization, you most probably need to think about your data architecture and consider possible best practices. Gaining a competitive edge, remaining customer-centric to the greatest extent possible, and streamlining processes to get on-the-button outcomes can all be traced back to an organization’s capacity to build a future-ready data architecture.
In what follows, we offer a short overview of the overarching capabilities of data architecture. These include user-centricity, elasticity, robustness, and the capacity to ensure the seamless flow of data at all times. Added to these are automation enablement, plus security and data governance considerations. These points form our checklist for what we perceive to be an anticipatory analytics ecosystem.
#big data #data science #big data analytics #data analysis #data architecture #data transformation #data platform #data strategy #cloud data platform #data acquisition