The AWS IoT Device SDK for Embedded C (C-SDK) is a collection of C source files under the MIT open source license that can be used in embedded applications to securely connect IoT devices to AWS IoT Core. It contains MQTT client, HTTP client, JSON Parser, AWS IoT Device Shadow, AWS IoT Jobs, and AWS IoT Device Defender libraries. This SDK is distributed in source form, and can be built into customer firmware along with application code, other libraries and an operating system (OS) of your choice. These libraries depend only on standard C libraries, so they can be ported to various operating systems, from embedded Real Time Operating Systems (RTOS) to Linux/Mac/Windows. You can find sample usage of C-SDK libraries on POSIX systems using OpenSSL (e.g. Linux demos in this repository), and on FreeRTOS using mbedTLS (e.g. FreeRTOS demos in the FreeRTOS repository).
For the latest release of C-SDK, please see the section for Releases and Documentation.
C-SDK includes libraries that are part of the FreeRTOS 202012.01 LTS release. Learn more about the FreeRTOS 202012.01 LTS libraries by clicking here.
The C-SDK libraries are licensed under the MIT open source license.
C-SDK simplifies access to various AWS IoT services. C-SDK has been tested to work with AWS IoT Core and an open source MQTT broker to ensure interoperability. The AWS IoT Device Shadow, AWS IoT Jobs, and AWS IoT Device Defender libraries are flexible to work with any MQTT client and JSON parser. The MQTT client and JSON parser libraries are offered as choices without being tightly coupled with the rest of the SDK. C-SDK contains the following libraries:
The coreMQTT library provides the ability to establish an MQTT connection with a broker over a customer-implemented transport layer, which can either be a secure channel like a TLS session (mutually authenticated or server-only authentication) or a non-secure channel like a plaintext TCP connection. This MQTT connection can be used for performing publish operations to MQTT topics and subscribing to MQTT topics. The library provides a mechanism to register customer-defined callbacks for receiving incoming PUBLISH, acknowledgement and keep-alive response events from the broker. The library has been refactored for memory optimization and is compliant with the MQTT 3.1.1 standard. It has no dependencies on any additional libraries other than the standard C library, a customer-implemented network transport interface, and optionally a customer-implemented platform time function. The refactored design embraces different use-cases, ranging from resource-constrained platforms using only QoS 0 MQTT PUBLISH messages to resource-rich platforms using QoS 2 MQTT PUBLISH over TLS connections.
See memory requirements for the latest release here.
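As a rough illustration of the usage described above, here is a minimal, hypothetical sketch based on the coreMQTT v1.x API: it initializes a context, connects, and publishes a QoS 0 message. The transport send/receive functions, the time function, the event callback, the NetworkContext definition, and the client identifier and topic are application-supplied assumptions, not part of the library.

#include <stdbool.h>
#include <string.h>
#include "core_mqtt.h"

/* Placeholders: a real application implements these over its TCP/TLS stack
 * and defines struct NetworkContext for its connection state. */
extern int32_t transportSend( NetworkContext_t * pNetworkContext, const void * pBuffer, size_t bytesToSend );
extern int32_t transportRecv( NetworkContext_t * pNetworkContext, void * pBuffer, size_t bytesToRecv );
extern uint32_t getTimeMs( void );
extern void eventCallback( MQTTContext_t * pMqttContext, MQTTPacketInfo_t * pPacketInfo, MQTTDeserializedInfo_t * pDeserializedInfo );

static uint8_t networkBuffer[ 1024 ];

void mqttPublishExample( NetworkContext_t * pNetworkContext )
{
    MQTTContext_t mqttContext = { 0 };
    TransportInterface_t transport = { 0 };
    MQTTFixedBuffer_t fixedBuffer = { 0 };
    MQTTConnectInfo_t connectInfo = { 0 };
    MQTTPublishInfo_t publishInfo = { 0 };
    bool sessionPresent = false;

    /* Wire up the customer-implemented transport and the buffer used for packet serialization. */
    transport.pNetworkContext = pNetworkContext;
    transport.send = transportSend;
    transport.recv = transportRecv;
    fixedBuffer.pBuffer = networkBuffer;
    fixedBuffer.size = sizeof( networkBuffer );

    ( void ) MQTT_Init( &mqttContext, &transport, getTimeMs, eventCallback, &fixedBuffer );

    /* Establish the MQTT session over the already-connected transport. */
    connectInfo.cleanSession = true;
    connectInfo.pClientIdentifier = "exampleClient";
    connectInfo.clientIdentifierLength = ( uint16_t ) strlen( connectInfo.pClientIdentifier );
    connectInfo.keepAliveSeconds = 60U;
    ( void ) MQTT_Connect( &mqttContext, &connectInfo, NULL, 5000U, &sessionPresent );

    /* Publish a QoS 0 message. */
    publishInfo.qos = MQTTQoS0;
    publishInfo.pTopicName = "example/topic";
    publishInfo.topicNameLength = ( uint16_t ) strlen( publishInfo.pTopicName );
    publishInfo.pPayload = "hello";
    publishInfo.payloadLength = 5U;
    ( void ) MQTT_Publish( &mqttContext, &publishInfo, MQTT_GetPacketId( &mqttContext ) );

    /* Call MQTT_ProcessLoop() periodically to receive incoming packets and keep the connection alive. */
}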
The coreHTTP library provides the ability to establish an HTTP connection with a server over a customer-implemented transport layer, which can either be a secure channel like a TLS session (mutually authenticated or server-only authentication) or a non-secure channel like a plaintext TCP connection. The HTTP connection can be used to make "GET" (including range requests), "PUT", "POST" and "HEAD" requests. The library provides a mechanism to register a customer-defined callback for receiving parsed header fields in an HTTP response. The library has been refactored for memory optimization, and is a client implementation of a subset of the HTTP/1.1 standard.
See memory requirements for the latest release here.
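The following is a minimal, hypothetical sketch of a GET request with coreHTTP. The TransportInterface_t instance, buffer size, host, and path are assumptions for illustration; a real application wires the transport functions to its own TCP/TLS stack.

#include <string.h>
#include "core_http_client.h"

/* Placeholder: the application implements this transport interface over its TCP/TLS stack. */
extern TransportInterface_t transportInterface;

/* Single user buffer holding the outgoing headers and, later, the full response. */
static uint8_t userBuffer[ 2048 ];

void httpGetExample( void )
{
    HTTPRequestInfo_t requestInfo = { 0 };
    HTTPRequestHeaders_t requestHeaders = { 0 };
    HTTPResponse_t response = { 0 };

    requestInfo.pMethod = HTTP_METHOD_GET;
    requestInfo.methodLen = sizeof( HTTP_METHOD_GET ) - 1;
    requestInfo.pHost = "example.com";
    requestInfo.hostLen = strlen( "example.com" );
    requestInfo.pPath = "/index.html";
    requestInfo.pathLen = strlen( "/index.html" );
    requestInfo.reqFlags = HTTP_REQUEST_KEEP_ALIVE_FLAG;

    requestHeaders.pBuffer = userBuffer;
    requestHeaders.bufferLen = sizeof( userBuffer );
    response.pBuffer = userBuffer;
    response.bufferLen = sizeof( userBuffer );

    if( HTTPClient_InitializeRequestHeaders( &requestHeaders, &requestInfo ) == HTTPSuccess )
    {
        /* No request body for a GET; pass NULL/0. */
        if( HTTPClient_Send( &transportInterface, &requestHeaders, NULL, 0, &response, 0 ) == HTTPSuccess )
        {
            /* response.statusCode, response.pBody and response.bodyLen are now populated. */
        }
    }
}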
The coreJSON library is a JSON parser that strictly enforces the ECMA-404 JSON standard. It provides a function to validate a JSON document, and a function to search for a key and return its value. A search can descend into nested structures using a compound query key. A JSON document validation also checks for illegal UTF8 encodings and illegal Unicode escape sequences.
See memory requirements for the latest release here.
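For illustration, here is a small, self-contained sketch of the validate-then-search flow described above; the JSON document and the compound query key ("location.city") are made up for the example.

#include <stdio.h>
#include <string.h>
#include "core_json.h"

int main( void )
{
    /* Hypothetical document: query the nested "location.city" value. */
    char buffer[] = "{\"name\":\"device-1\",\"location\":{\"city\":\"Seattle\"}}";
    size_t bufferLength = sizeof( buffer ) - 1;
    char query[] = "location.city";
    char * value = NULL;
    size_t valueLength = 0;

    /* Validate the document first, then search it with a compound query key. */
    if( JSON_Validate( buffer, bufferLength ) == JSONSuccess )
    {
        if( JSON_Search( buffer, bufferLength, query, sizeof( query ) - 1, &value, &valueLength ) == JSONSuccess )
        {
            printf( "city = %.*s\n", ( int ) valueLength, value ); /* prints: city = Seattle */
        }
    }

    return 0;
}

Note that JSON_Search() returns a pointer into the original buffer, so the document must remain valid while the value is used.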
The corePKCS11 library is an implementation of the PKCS #11 interface (API) that makes it easier to develop applications that rely on cryptographic operations. Only a subset of the PKCS #11 v2.4 standard has been implemented, with a focus on operations involving asymmetric keys, random number generation, and hashing.
The Cryptoki or PKCS #11 standard defines a platform-independent API to manage and use cryptographic tokens. The name, "PKCS #11", is used interchangeably to refer to the API itself and the standard which defines it.
The PKCS #11 API is useful for writing software without taking a dependency on any particular implementation or hardware. By writing against the PKCS #11 standard interface, code can be used interchangeably with multiple algorithms, implementations and hardware.
Generally vendors for secure cryptoprocessors such as Trusted Platform Module (TPM), Hardware Security Module (HSM), Secure Element, or any other type of secure hardware enclave, distribute a PKCS #11 implementation with the hardware. The purpose of corePKCS11 mock is therefore to provide a PKCS #11 implementation that allows for rapid prototyping and development before switching to a cryptoprocessor specific PKCS #11 implementation in production devices.
Since the PKCS #11 interface is defined as part of the PKCS #11 specification, replacing corePKCS11 with another implementation should require little porting effort, as the interface will not change. The system tests distributed in the corePKCS11 repository can be leveraged to verify that the behavior of a different implementation is similar to corePKCS11.
See memory requirements for the latest release here.
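As a rough sketch of how an application drives a PKCS #11 module (whether corePKCS11 or a vendor implementation), the following uses only standard Cryptoki calls to obtain the function list, open a session, and generate random bytes. It assumes a single-slot module and omits detailed error handling.

#include "core_pkcs11.h"

void pkcs11RandomExample( void )
{
    CK_FUNCTION_LIST_PTR pFunctionList = NULL;
    CK_SLOT_ID slotId = 0;
    CK_ULONG slotCount = 1;
    CK_SESSION_HANDLE session = CK_INVALID_HANDLE;
    CK_BYTE randomBytes[ 16 ] = { 0 };

    /* Obtain the module's function list, then use it for all further Cryptoki calls. */
    if( ( C_GetFunctionList( &pFunctionList ) == CKR_OK ) && ( pFunctionList != NULL ) )
    {
        /* This sketch assumes a single-slot module; real code should size the slot list properly. */
        if( ( pFunctionList->C_Initialize( NULL ) == CKR_OK ) &&
            ( pFunctionList->C_GetSlotList( CK_TRUE, &slotId, &slotCount ) == CKR_OK ) &&
            ( pFunctionList->C_OpenSession( slotId, CKF_SERIAL_SESSION, NULL, NULL, &session ) == CKR_OK ) &&
            ( pFunctionList->C_GenerateRandom( session, randomBytes, sizeof( randomBytes ) ) == CKR_OK ) )
        {
            /* randomBytes now holds 16 bytes of random data from the token. */
        }

        if( session != CK_INVALID_HANDLE )
        {
            ( void ) pFunctionList->C_CloseSession( session );
        }

        ( void ) pFunctionList->C_Finalize( NULL );
    }
}

Because only standard Cryptoki calls are used, the same code should run against corePKCS11 during prototyping and against a hardware-backed PKCS #11 implementation in production.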
The AWS IoT Device Shadow library enables you to store and retrieve the current state (one or more shadows) of every registered device. A device’s shadow is a persistent, virtual representation of your device that you can interact with from AWS IoT Core even if the device is offline. The device state captured in its "shadow" is represented as a JSON document. The device can send commands over MQTT to get, update and delete its latest state, as well as receive notifications over MQTT about changes in its state. The device’s shadow(s) are uniquely identified by the name of the corresponding "thing", a representation of a specific device or logical entity on the AWS Cloud. See Managing Devices with AWS IoT for more information on IoT "things". This library supports named shadows, a feature of the AWS IoT Device Shadow service that allows you to create multiple shadows for a single IoT device. More details about AWS IoT Device Shadow can be found in AWS IoT documentation.
The AWS IoT Device Shadow library has no dependencies on additional libraries other than the standard C library. It also doesn’t have any platform dependencies, such as threading or synchronization. It can be used with any MQTT library and any JSON library (see demos with coreMQTT and coreJSON).
See memory requirements for the latest release here.
The AWS IoT Jobs library enables you to interact with the AWS IoT Jobs service which notifies one or more connected devices of a pending “Job”. A Job can be used to manage your fleet of devices, update firmware and security certificates on your devices, or perform administrative tasks such as restarting devices and performing diagnostics. For documentation of the service, please see the AWS IoT Developer Guide. Interactions with the Jobs service use the MQTT protocol. This library provides an API to compose and recognize the MQTT topic strings used by the Jobs service.
The AWS IoT Jobs library has no dependencies on additional libraries other than the standard C library. It also doesn’t have any platform dependencies, such as threading or synchronization. It can be used with any MQTT library and any JSON library (see demos with libmosquitto and coreJSON).
See memory requirements for the latest release here.
The AWS IoT Device Defender library enables you to interact with the AWS IoT Device Defender service to continuously monitor security metrics from devices for deviations from what you have defined as appropriate behavior for each device. If something doesn’t look right, AWS IoT Device Defender sends out an alert so you can take action to remediate the issue. More details about Device Defender can be found in AWS IoT Device Defender documentation. This library supports custom metrics, a feature that helps you monitor operational health metrics that are unique to your fleet or use case. For example, you can define a new metric to monitor the memory usage or CPU usage on your devices.
The AWS IoT Device Defender library has no dependencies on additional libraries other than the standard C library. It also doesn’t have any platform dependencies, such as threading or synchronization. It can be used with any MQTT library and any JSON library (see demos with coreMQTT and coreJSON).
See memory requirements for the latest release here.
The AWS IoT Over-the-air Update (OTA) library enables you to manage the notification of a newly available update, download the update, and perform cryptographic verification of the firmware update. Using the OTA library, you can logically separate firmware updates from the application running on your devices. You can also use the library to send other files (e.g. images, certificates) to one or more devices registered with AWS IoT. More details about OTA library can be found in AWS IoT Over-the-air Update documentation.
Apart from the standard C library, the AWS IoT Over-the-air Update library depends on coreJSON for parsing JSON job documents and on tinyCBOR for decoding encoded data streams. It can be used with any MQTT library, HTTP library, and operating system (e.g. Linux, FreeRTOS) (see demos with coreMQTT and coreHTTP over Linux).
See memory requirements for the latest release here.
The AWS IoT Fleet Provisioning library enables you to interact with the AWS IoT Fleet Provisioning MQTT APIs in order to provision IoT devices without preexisting device certificates. With AWS IoT Fleet Provisioning, devices can securely receive unique device certificates from AWS IoT when they connect for the first time. For an overview of all provisioning options offered by AWS IoT, see the device provisioning documentation. For details about Fleet Provisioning, refer to the AWS IoT Fleet Provisioning documentation.
See memory requirements for the latest release here.
The AWS SigV4 library enables you to sign HTTP requests with the Signature Version 4 signing process. Signature Version 4 (SigV4) is the process of adding authentication information to HTTP requests to AWS services. For security, most requests to AWS must be signed with an access key, which consists of an access key ID and a secret access key.
See memory requirements for the latest release here.
The backoffAlgorithm library is a utility library to calculate the backoff period for retrying network operations (such as a failed network connection with a server) using an exponential backoff with jitter algorithm. This library uses the "Full Jitter" strategy for the exponential backoff with jitter algorithm. More information about the algorithm can be seen in the Exponential Backoff and Jitter AWS blog.
Exponential backoff with jitter is typically used when retrying a failed connection or network request to a server. It helps to mitigate failed network operations caused by network congestion or high server load, by spreading out retry requests across multiple devices attempting network operations. In addition, in an environment with poor connectivity, a client can get disconnected at any time, and a backoff strategy helps the client to conserve battery by not repeatedly attempting reconnections when they are unlikely to succeed.
The backoffAlgorithm library has no dependencies on libraries other than the standard C library.
See memory requirements for the latest release here.
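Here is a minimal sketch of how the library is typically used to retry a connection. The connectToServer() function, the timing values, and the use of rand() as the randomness source are assumptions for illustration; production code should use a proper entropy source.

#include <stdbool.h>
#include <stdint.h>
#include <stdlib.h>
#include "backoff_algorithm.h"

/* Placeholder for the operation being retried, e.g. a TCP/TLS connect. */
extern bool connectToServer( void );

void retryWithBackoffExample( void )
{
    BackoffAlgorithmContext_t retryContext;
    BackoffAlgorithmStatus_t retryStatus = BackoffAlgorithmSuccess;
    uint16_t nextBackoffMs = 0;
    bool connected = false;

    /* Base backoff of 500 ms, capped at 5000 ms, with at most 5 attempts. */
    BackoffAlgorithm_InitializeParams( &retryContext, 500U, 5000U, 5U );

    do
    {
        connected = connectToServer();

        if( !connected )
        {
            /* rand() stands in for a proper entropy source in this sketch. */
            retryStatus = BackoffAlgorithm_GetNextBackoff( &retryContext, ( uint32_t ) rand(), &nextBackoffMs );

            if( retryStatus == BackoffAlgorithmSuccess )
            {
                /* Sleep for nextBackoffMs milliseconds before retrying (platform-specific). */
            }
        }
    } while( !connected && ( retryStatus == BackoffAlgorithmSuccess ) );
}

BackoffAlgorithm_GetNextBackoff() returns BackoffAlgorithmRetriesExhausted once the configured maximum number of attempts has been used, which ends the loop above.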
When establishing a connection with AWS IoT, users can optionally report the Operating System, Hardware Platform and MQTT client version information of their device to AWS. This information can help AWS IoT provide faster issue resolution and technical support. If users want to report this information, they can send a specially formatted string (see below) in the username field of the MQTT CONNECT packet.
Format
The format of the username string with metrics is:
<Actual_Username>?SDK=<OS_Name>&Version=<OS_Version>&Platform=<Hardware_Platform>&MQTTLib=<MQTT_Library_name>@<MQTT_Library_version>
where <Actual_Username> is the client's MQTT username (if any), <OS_Name> and <OS_Version> identify the device's operating system, <Hardware_Platform> identifies the hardware platform, and <MQTT_Library_name> and <MQTT_Library_version> identify the MQTT client library and its version.
Example
/* Username string:
* iotuser?SDK=Ubuntu&Version=20.10&Platform=RaspberryPi&MQTTLib=coremqtt@1.1.0
*/
#define OS_NAME "Ubuntu"
#define OS_VERSION "20.10"
#define HARDWARE_PLATFORM_NAME "RaspberryPi"
#define MQTT_LIB "coremqtt@1.1.0"
#define USERNAME_STRING "iotuser?SDK=" OS_NAME "&Version=" OS_VERSION "&Platform=" HARDWARE_PLATFORM_NAME "&MQTTLib=" MQTT_LIB
#define USERNAME_STRING_LENGTH ( ( uint16_t ) ( sizeof( USERNAME_STRING ) - 1 ) )
MQTTConnectInfo_t connectInfo;
connectInfo.pUserName = USERNAME_STRING;
connectInfo.userNameLength = USERNAME_STRING_LENGTH;
mqttStatus = MQTT_Connect( pMqttContext, &connectInfo, NULL, CONNACK_RECV_TIMEOUT_MS, pSessionPresent );
C-SDK releases will now follow a date-based versioning scheme with the format YYYYMM.NN, where YYYY is the year of the release, MM the month, and NN the release order within that month (00 being the first release).
For example, a second release in June 2021 would be 202106.01. Although the SDK releases have moved to date-based versioning, each library within the SDK will still retain semantic versioning. In semantic versioning, the version number itself (X.Y.Z) indicates whether the release is a major, minor, or point release. You can use the semantic version of a library to assess the scope and impact of a new release on your application.
All of the released versions of the C-SDK libraries are available as git tags. For example, the last release of the v3 SDK version is available at tag 3.1.2.
API documentation of 202108.00 release
This release introduces the refactored AWS IoT Fleet Provisioning library and the new AWS SigV4 library.
Additionally, this release brings minor version updates in the AWS IoT Over-the-Air Update and corePKCS11 libraries.
API documentation of 202103.00 release
This release includes a major update to the APIs of the AWS IoT Over-the-air Update library.
Additionally, the AWS IoT Device Shadow library introduces a minor update by adding support for named shadows, a feature of the AWS IoT Device Shadow service that allows you to create multiple shadows for a single IoT device. The AWS IoT Jobs library introduces a minor update by introducing macros for the $next job ID and compile-time generation of topic strings. The AWS IoT Device Defender library introduces a minor update that adds macros to the API for the custom metrics feature of the AWS IoT Device Defender service. corePKCS11 also introduces a patch update by removing the pkcs11configPAL_DESTROY_SUPPORTED config and the mbedTLS platform abstraction layer of DestroyObject. Lastly, no code changes are introduced for backoffAlgorithm, coreHTTP, coreMQTT, and coreJSON; however, patch updates are made to improve documentation and CI.
API documentation of 202012.01 release
This release includes AWS IoT Over-the-air Update(Release Candidate), backoffAlgorithm, and PKCS #11 libraries. Additionally, there is a major update to the coreJSON and coreHTTP APIs. All libraries continue to undergo code quality checks (e.g. MISRA-C compliance), and Coverity static analysis. In addition, all libraries except AWS IoT Over-the-air Update and backoffAlgorithm undergo validation of memory safety with the C Bounded Model Checker (CBMC) automated reasoning tool.
API documentation of 202011.00 release
This release includes refactored HTTP client, AWS IoT Device Defender, and AWS IoT Jobs libraries. Additionally, there is a major update to the coreJSON API. All libraries continue to undergo code quality checks (e.g. MISRA-C compliance), Coverity static analysis, and validation of memory safety with the C Bounded Model Checker (CBMC) automated reasoning tool.
API documentation of 202009.00 release
This release includes refactored MQTT, JSON Parser, and AWS IoT Device Shadow libraries for optimized memory usage and modularity. These libraries are included in the SDK via Git submoduling. These libraries have gone through code quality checks including verification that no function has a GNU Complexity score over 8, and checks against deviations from mandatory rules in the MISRA coding standard. Deviations from the MISRA C:2012 guidelines are documented under MISRA Deviations. These libraries have also undergone both static code analysis from Coverity static analysis, and validation of memory safety and data structure invariance through the CBMC automated reasoning tool.
If you are upgrading from v3.x API of the C-SDK to the 202009.00 release, please refer to Migration guide from v3.1.2 to 202009.00 and newer releases. If you are using the C-SDK v4_beta_deprecated branch, note that we will continue to maintain this branch for critical bug fixes and security patches but will not add new features to it. See the C-SDK v4_beta_deprecated branch README for additional details.
Details available here.
All libraries depend on the ISO C90 standard library and additionally on the stdint.h library for fixed-width integers, including uint8_t, int8_t, uint16_t, uint32_t and int32_t, and constant macros like UINT16_MAX. If your platform does not support the stdint.h library, definitions of the mentioned fixed-width integer types will be required for porting any C-SDK library to your platform.
Guide for porting coreMQTT library to your platform is available here.
Guide for porting coreHTTP library is available here.
Guide for porting AWS IoT Device Shadow library is available here.
Guide for porting AWS IoT Device Defender library is available here.
Guide for porting OTA library to your platform is available here.
Migration guide for MQTT library is available here.
Migration guide for Shadow library is available here.
Migration guide for Jobs library is available here.
The main branch hosts the continuous development of the AWS IoT Embedded C SDK (C-SDK) libraries. Please be aware that the development at the tip of the main branch is continuously in progress, and may have bugs. Consider using the tagged releases of the C-SDK for production ready software.
The v4_beta_deprecated branch contains a beta version of the C-SDK libraries, which is now deprecated. This branch was earlier named v4_beta and was renamed to v4_beta_deprecated. The libraries in this branch will not be released. However, critical bugs will be fixed and tested. No new features will be added to this branch.
This repository uses Git Submodules to bring in the C-SDK libraries (e.g., MQTT) and third-party dependencies (e.g., mbedtls for the POSIX platform transport layer). Note: If you download the ZIP file provided by the GitHub UI, you will not get the contents of the submodules (the ZIP file is also not a valid git repository). If you download from the 202012.00 Release Page, you will get the entire repository (including the submodules) in the ZIP file, aws-iot-device-sdk-embedded-c-202012.00.zip. To clone the latest commit to the main branch using HTTPS:
git clone --recurse-submodules https://github.com/aws/aws-iot-device-sdk-embedded-C.git
Using SSH:
git clone --recurse-submodules git@github.com:aws/aws-iot-device-sdk-embedded-C.git
If you have downloaded the repo without using the --recurse-submodules argument, you need to run:
git submodule update --init --recursive
When building with CMake, submodules are also recursively cloned automatically. However, -DBUILD_CLONE_SUBMODULES=0 can be passed as a CMake flag to disable this functionality. This is useful when you'd like to build with CMake while using a different commit of a submodule.
The libraries in this SDK are not dependent on any operating system. However, the demos for the libraries in this SDK are built and tested on a Linux platform. The demos build with CMake, a cross-platform build tool.
stdint.h is required for fixed-width integer types, including uint8_t, int8_t, uint16_t, uint32_t and int32_t, and constant macros like UINT16_MAX, while stdbool.h is required for boolean parameters in coreMQTT. For compilers that do not provide these header files, coreMQTT provides the files stdint.readme and stdbool.readme, which can be renamed to stdint.h and stdbool.h, respectively, to provide the required type definitions.
Build Dependencies
The following table shows libraries that need to be installed on your system to run certain demos. If a dependency is not installed and cannot be built from source, demos that require that dependency will be excluded from the default all target.
Dependency | Version | Usage |
---|---|---|
OpenSSL | 1.1.0 or later | All TLS demos and tests with the exception of PKCS11 |
Mosquitto Client | 1.4.10 or later | AWS IoT Jobs Mosquitto demo |
You need to setup an AWS account and access the AWS IoT console for running the AWS IoT Device Shadow library, AWS IoT Device Defender library, AWS IoT Jobs library, AWS IoT OTA library and coreHTTP S3 download demos. Also, the AWS account can be used for running the MQTT mutual auth demo against AWS IoT broker. Note that running the AWS IoT Device Defender, AWS IoT Jobs and AWS IoT Device Shadow library demos require the setup of a Thing resource for the device running the demo. Follow the links to:
The MQTT Mutual Authentication and AWS IoT Shadow demos include example AWS IoT policy documents to run each respective demo with AWS IoT. You may use the MQTT Mutual auth and Shadow example policies by replacing [AWS_REGION] and [AWS_ACCOUNT_ID] with the strings of your region and account identifier. While the IoT Thing name and MQTT client identifier do not need to match for the demos to run, the example policies have the Thing name and client identifier identical as per AWS IoT best practices.
It can be very helpful to also have the AWS Command Line Interface tooling installed.
You can pass the following configuration settings as command line options in order to run the mutual auth demos. Make sure to run the following command in the root directory of the C-SDK:
## optionally find your-aws-iot-endpoint from the command line
aws iot describe-endpoint --endpoint-type iot:Data-ATS
cmake -S . -Bbuild -DAWS_IOT_ENDPOINT="<your-aws-iot-endpoint>" -DCLIENT_CERT_PATH="<your-client-certificate-path>" -DCLIENT_PRIVATE_KEY_PATH="<your-client-private-key-path>"
In order to set these configurations manually, edit demo_config.h in demos/mqtt/mqtt_demo_mutual_auth/ and demos/http/http_demo_mutual_auth/ to #define the following:
- AWS_IOT_ENDPOINT to your custom endpoint. This is found on the Settings page of the AWS IoT Console and has a format of ABCDEFG1234567.iot.<aws-region>.amazonaws.com, where <aws-region> can be an AWS region like us-east-2. It can also be retrieved with aws iot describe-endpoint --endpoint-type iot:Data-ATS.
- CLIENT_CERT_PATH to the path of the client certificate downloaded when setting up the device certificate in AWS IoT Account Setup.
- CLIENT_PRIVATE_KEY_PATH to the path of the private key downloaded when setting up the device certificate in AWS IoT Account Setup.
It is possible to configure ROOT_CA_CERT_PATH to any PEM-encoded Root CA Certificate. However, this is optional because CMake will download and set it to AmazonRootCA1.pem when unspecified.
To build the AWS IoT Device Defender and AWS IoT Device Shadow demos, you can pass the following configuration settings as command line options. Make sure to run the following command in the root directory of the C-SDK:
cmake -S . -Bbuild -DAWS_IOT_ENDPOINT="<your-aws-iot-endpoint>" -DROOT_CA_CERT_PATH="<your-path-to-amazon-root-ca>" -DCLIENT_CERT_PATH="<your-client-certificate-path>" -DCLIENT_PRIVATE_KEY_PATH="<your-client-private-key-path>" -DTHING_NAME="<your-registered-thing-name>"
An Amazon Root CA certificate can be downloaded from here.
In order to set these configurations manually, edit demo_config.h in the demo folder to #define the following:
- AWS_IOT_ENDPOINT to your custom endpoint. This is found on the Settings page of the AWS IoT Console and has a format of ABCDEFG1234567.iot.us-east-2.amazonaws.com.
- ROOT_CA_CERT_PATH to the path of the root CA certificate downloaded when setting up the device certificate in AWS IoT Account Setup.
- CLIENT_CERT_PATH to the path of the client certificate downloaded when setting up the device certificate in AWS IoT Account Setup.
- CLIENT_PRIVATE_KEY_PATH to the path of the private key downloaded when setting up the device certificate in AWS IoT Account Setup.
- THING_NAME to the name of the Thing created in AWS IoT Account Setup.
To build the AWS IoT Fleet Provisioning demo, you can pass the following configuration settings as command line options. Make sure to run the following command in the root directory of the C-SDK:
cmake -S . -Bbuild -DAWS_IOT_ENDPOINT="<your-aws-iot-endpoint>" -DROOT_CA_CERT_PATH="<your-path-to-amazon-root-ca>" -DCLAIM_CERT_PATH="<your-claim-certificate-path>" -DCLAIM_PRIVATE_KEY_PATH="<your-claim-private-key-path>" -DPROVISIONING_TEMPLATE_NAME="<your-template-name>" -DDEVICE_SERIAL_NUMBER="<your-serial-number>"
An Amazon Root CA certificate can be downloaded from here.
To create a provisioning template and claim credentials, sign into your AWS account and visit here. Make sure to enable the "Use the AWS IoT registry to manage your device fleet" option. Once you have created the template and credentials, modify the claim certificate's policy to match the sample policy.
In order to set these configurations manually, edit demo_config.h in the demo folder to #define the following:
- AWS_IOT_ENDPOINT to your custom endpoint. This is found on the Settings page of the AWS IoT Console and has a format of ABCDEFG1234567.iot.us-east-2.amazonaws.com.
- ROOT_CA_CERT_PATH to the path of the root CA certificate downloaded when setting up the device certificate in AWS IoT Account Setup.
- CLAIM_CERT_PATH to the path of the claim certificate downloaded when setting up the template and claim credentials.
- CLAIM_PRIVATE_KEY_PATH to the path of the private key downloaded when setting up the template and claim credentials.
- PROVISIONING_TEMPLATE_NAME to the name of the provisioning template created.
- DEVICE_SERIAL_NUMBER to an arbitrary string representing a device identifier.
You can pass the following configuration settings as command line options in order to run the S3 demos. Make sure to run the following command in the root directory of the C-SDK:
cmake -S . -Bbuild -DS3_PRESIGNED_GET_URL="s3-get-url" -DS3_PRESIGNED_PUT_URL="s3-put-url"
S3_PRESIGNED_PUT_URL is only needed for the S3 upload demo.
In order to set these configurations manually, edit demo_config.h in demos/http/http_demo_s3_download_multithreaded and demos/http/http_demo_s3_upload to #define the following:
- S3_PRESIGNED_GET_URL to an S3 presigned URL with GET access.
- S3_PRESIGNED_PUT_URL to an S3 presigned URL with PUT access.
You can generate the presigned URLs using demos/http/common/src/presigned_urls_gen.py. More info can be found here.
Refer to demos/http/http_demo_s3_download/README.md for the steps needed to configure and run the S3 Download HTTP demo using the SigV4 library, which generates the authorization HTTP header needed to authenticate the HTTP requests sent to S3.
apt install curl libmosquitto-dev
If the platform does not contain the libmosquitto library, the demo will build the library from source. libmosquitto 1.4.10 or any later version of the first major release is required to run this demo.
The following creates a job that specifies a Linux Kernel link for downloading.
aws iot create-job \
--job-id 'job_1' \
--targets arn:aws:iot:us-west-2:<account-id>:thing/<thing-name> \
--document '{"url":"https://cdn.kernel.org/pub/linux/kernel/v5.x/linux-5.8.5.tar.xz"}'
After you build and run the initial executable you will have to create another executable and schedule an OTA update job with this image.
- Update APP_VERSION_BUILD in demos/ota/ota_demo_core_[mqtt/http]/demo_config.h to a different version than what is currently running.
- Rebuild the demo in a second build directory, for example build-dir-2.
- Rename the new executable so it can be distinguished from the running one, for example: mv ota_demo_core_mqtt ota_demo_core_mqtt2
- When creating the OTA update job, use the new image (for example build-dir-2/bin/ota_demo2) and specify its pathname on the device, for example /home/ubuntu/aws-iot-device-sdk-embedded-C-staging/build-dir/bin/ota_demo_core_mqtt2.
- Run the initial executable with sudo ./ota_demo_core_mqtt or sudo ./ota_demo_core_http.
- After the new image has been downloaded, make it executable and run it:
chmod 775 ota_demo_core_mqtt2
sudo ./ota_demo_core_mqtt2
Before building the demos, ensure you have installed the prerequisite software. On Ubuntu 18.04 and 20.04, gcc, cmake, and OpenSSL can be installed with:
sudo apt install build-essential cmake libssl-dev
Run CMake in the root directory of the C-SDK to generate the Makefiles:
cmake -S . -Bbuild && cd build
To build a single demo, choose one from the list below (or run make help | grep demo to list them):
defender_demo
http_demo_basic_tls
http_demo_mutual_auth
http_demo_plaintext
http_demo_s3_download
http_demo_s3_download_multithreaded
http_demo_s3_upload
jobs_demo_mosquitto
mqtt_demo_basic_tls
mqtt_demo_mutual_auth
mqtt_demo_plaintext
mqtt_demo_serializer
mqtt_demo_subscription_manager
ota_demo_core_http
ota_demo_core_mqtt
pkcs11_demo_management_and_rng
pkcs11_demo_mechanisms_and_digests
pkcs11_demo_objects
pkcs11_demo_sign_and_verify
shadow_demo_main
Replace demo_name with your desired demo, then build it: make demo_name
Go to the build/bin directory and run any demo executables from there.
To build all configured demos, run:
cmake -S . -Bbuild && cd build
make
Then go to the build/bin directory and run any demo executables from there.
The corePKCS11 demos do not require any AWS IoT resources setup, and are standalone. The demos build upon each other to introduce concepts in PKCS #11 sequentially. Below is the recommended order.
pkcs11_demo_management_and_rng
pkcs11_demo_mechanisms_and_digests
pkcs11_demo_objects
pkcs11_demo_sign_and_verify
Note: the last demo, pkcs11_demo_sign_and_verify, requires the objects generated by pkcs11_demo_objects to be in the directory the demo is executed from.
Install Docker:
curl -fsSL https://get.docker.com -o get-docker.sh
sh get-docker.sh
Installing Mosquitto to run MQTT demos locally
The following instructions have been tested on an Ubuntu 18.04 environment with Docker and OpenSSL installed.
Download the official Docker image for Mosquitto 1.6.14. This version is deliberately chosen so that the Docker container can load certificates from the host system. Any version after 1.6.14 will drop privileges as soon as the configuration file has been read (before TLS certificates are loaded).
docker pull eclipse-mosquitto:1.6.14
If a Mosquitto broker with TLS communication needs to be run, ignore this step and proceed to the next step. A Mosquitto broker with plain text communication can be run by executing the command below.
docker run -it -p 1883:1883 --name mosquitto-plain-text eclipse-mosquitto:1.6.14
Set BROKER_ENDPOINT defined in demos/mqtt/mqtt_demo_plaintext/demo_config.h to localhost.
Ignore the remaining steps unless a Mosquitto broker with TLS communication also needs to be run.
For TLS communication with Mosquitto broker, server and CA credentials need to be created. Use OpenSSL commands to generate the credentials for the Mosquitto server.
# Generate CA key and certificate. Provide the Subject field information as appropriate for CA certificate.
openssl req -x509 -nodes -sha256 -days 365 -newkey rsa:2048 -keyout ca.key -out ca.crt
# Generate server key and certificate. Provide the Subject field information as appropriate for the server certificate.
# Make sure the Common Name (CN) field is different from the root CA certificate.
openssl req -nodes -sha256 -new -keyout server.key -out server.csr
# Sign with the CA cert.
openssl x509 -req -sha256 -in server.csr -CA ca.crt -CAkey ca.key -CAcreateserial -out server.crt -days 365
Note: Make sure to use different Common Name (CN) details in the CA and server certificates; otherwise, the SSL handshake fails when both certificates have exactly the same Common Name (CN).
port 8883
cafile /mosquitto/config/ca.crt
certfile /mosquitto/config/server.crt
keyfile /mosquitto/config/server.key
# Use this option for TLS mutual authentication (where client will provide CA signed certificate)
#require_certificate true
tls_version tlsv1.2
#use_identity_as_username true
Create a mosquitto.conf file (as shown above) that uses port 8883 for TLS communication and provides the paths to the generated credentials.
Run the docker container from the local directory containing the generated credential and mosquitto.conf files.
docker run -it -p 8883:8883 -v $(pwd):/mosquitto/config/ --name mosquitto-basic-tls eclipse-mosquitto:1.6.14
Update demos/mqtt/mqtt_demo_basic_tls/demo_config.h to the following:
Set BROKER_ENDPOINT to localhost.
Set ROOT_CA_CERT_PATH to the absolute path of the CA certificate created in step 4 for the local Mosquitto server.
Installing httpbin to run HTTP demos locally
Run httpbin through port 80:
docker pull kennethreitz/httpbin
docker run -p 80:80 kennethreitz/httpbin
SERVER_HOST defined in demos/http/http_demo_plaintext/demo_config.h can now be set to localhost.
To run http_demo_basic_tls, download ngrok in order to create an HTTPS tunnel to the httpbin server currently hosted on port 80:
./ngrok http 80 # May have to use ./ngrok.exe depending on OS or filename of the executable
ngrok will provide an https link that can be substituted in demos/http/http_demo_basic_tls/demo_config.h and has a format of https://ABCDEFG12345.ngrok.io.
Set SERVER_HOST in demos/http/http_demo_basic_tls/demo_config.h to the https link provided by ngrok, without https:// preceding it.
You must also download the Root CA certificate provided by the ngrok https link and set ROOT_CA_CERT_PATH in demos/http/http_demo_basic_tls/demo_config.h to the file path of the downloaded certificate.
The C-SDK libraries and platform abstractions can be installed to a file system through CMake. To do so, run the following command in the root directory of the C-SDK. Note that installation is not required to run any of the demos.
cmake -S . -Bbuild -DBUILD_DEMOS=0 -DBUILD_TESTS=0
cd build
sudo make install
Note that because make install will automatically build the all target, it may be useful to disable building demos and tests with -DBUILD_DEMOS=0 -DBUILD_TESTS=0 unless they have already been configured. Super-user permissions may be needed if installing to a system include or system library path.
To install only a subset of all libraries, pass -DINSTALL_LIBS to install only the libraries you need. By default, all libraries will be installed, but you may exclude any library that you don't need from this list:
-DINSTALL_LIBS="DEFENDER;SHADOW;JOBS;OTA;OTA_HTTP;OTA_MQTT;BACKOFF_ALGORITHM;HTTP;JSON;MQTT;PKCS"
By default, the install path will be in the project directory of the SDK. You can also set -DINSTALL_TO_SYSTEM=1 to install to the system path for headers and libraries in your OS (e.g. /usr/local/include & /usr/local/lib for Linux).
Upon entering make install, the location of each library will be specified first, followed by the location of all installed headers:
-- Installing: /usr/local/lib/libaws_iot_defender.so
-- Installing: /usr/local/lib/libaws_iot_shadow.so
...
-- Installing: /usr/local/include/aws/defender.h
-- Installing: /usr/local/include/aws/defender_config_defaults.h
-- Installing: /usr/local/include/aws/shadow.h
-- Installing: /usr/local/include/aws/shadow_config_defaults.h
You may also set an installation path of your choice by passing the following flags through CMake. Make sure to run the following command in the root directory of the C-SDK:
cmake -S . -Bbuild -DBUILD_DEMOS=0 -DBUILD_TESTS=0 \
-DCSDK_HEADER_INSTALL_PATH="/header/path" -DCSDK_LIB_INSTALL_PATH="/lib/path"
cd build
sudo make install
POSIX platform abstractions are used together with the C-SDK libraries in the demos. By default, these abstractions are also installed, but they can be excluded by passing the flag -DINSTALL_PLATFORM_ABSTRACTIONS=0.
Lastly, a custom config path for any specific library can also be specified through the following CMake flags, allowing libraries to be compiled with a config of your choice:
-DDEFENDER_CUSTOM_CONFIG_DIR="defender-config-directory"
-DSHADOW_CUSTOM_CONFIG_DIR="shadow-config-directory"
-DJOBS_CUSTOM_CONFIG_DIR="jobs-config-directory"
-DOTA_CUSTOM_CONFIG_DIR="ota-config-directory"
-DHTTP_CUSTOM_CONFIG_DIR="http-config-directory"
-DJSON_CUSTOM_CONFIG_DIR="json-config-directory"
-DMQTT_CUSTOM_CONFIG_DIR="mqtt-config-directory"
-DPKCS_CUSTOM_CONFIG_DIR="pkcs-config-directory"
Note that the file name of the header should not be included in the directory.
Note: For pre-generated documentation, please visit Releases and Documentation section.
The Doxygen references were created using Doxygen version 1.9.2. To generate the Doxygen pages, use the provided Python script at tools/doxygen/generate_docs.py. Please ensure that each of the library submodules under libraries/standard/ and libraries/aws/ are cloned before using this script.
cd <CSDK_ROOT>
git submodule update --init --recursive --checkout
python3 tools/doxygen/generate_docs.py
The generated documentation landing page is located at docs/doxygen/output/html/index.html.
Author: aws
Source code: https://github.com/aws/aws-iot-device-sdk-embedded-C
License: MIT license
Run C# scripts from the .NET CLI, define NuGet packages inline and edit/debug them in VS Code - all of that with full language services support from OmniSharp.
Name | Version | Framework(s)
---|---|---
dotnet-script (global tool) | | net6.0, net5.0, netcoreapp3.1
Dotnet.Script (CLI as Nuget) | | net6.0, net5.0, netcoreapp3.1
Dotnet.Script.Core | | netcoreapp3.1, netstandard2.0
Dotnet.Script.DependencyModel | | netstandard2.0
Dotnet.Script.DependencyModel.Nuget | | netstandard2.0
The only thing we need to install is .NET Core 3.1 or .NET 5.0 SDK.
.NET Core 2.1 introduced the concept of global tools, meaning that you can install dotnet-script using nothing but the .NET CLI.
dotnet tool install -g dotnet-script
You can invoke the tool using the following command: dotnet-script
Tool 'dotnet-script' (version '0.22.0') was successfully installed.
The advantage of this approach is that you can use the same command for installation across all platforms. .NET Core SDK also supports viewing a list of installed tools and their uninstallation.
dotnet tool list -g
Package Id Version Commands
---------------------------------------------
dotnet-script 0.22.0 dotnet-script
dotnet tool uninstall dotnet-script -g
Tool 'dotnet-script' (version '0.22.0') was successfully uninstalled.
On Windows, you can also install dotnet-script via Chocolatey:
choco install dotnet.script
We also provide a PowerShell script for installation.
(new-object Net.WebClient).DownloadString("https://raw.githubusercontent.com/filipw/dotnet-script/master/install/install.ps1") | iex
On Linux and macOS, a bash install script is available:
curl -s https://raw.githubusercontent.com/filipw/dotnet-script/master/install/install.sh | bash
If permission is denied, we can try with sudo:
curl -s https://raw.githubusercontent.com/filipw/dotnet-script/master/install/install.sh | sudo bash
A Dockerfile for running dotnet-script in a Linux container is available. Build:
cd build
docker build -t dotnet-script -f Dockerfile ..
And run:
docker run -it dotnet-script --version
You can manually download all the releases in zip format from the GitHub releases page.
Our typical helloworld.csx might look like this:
Console.WriteLine("Hello world!");
That is all it takes and we can execute the script. Args are accessible via the global Args array.
dotnet script helloworld.csx
Simply create a folder somewhere on your system and issue the following command.
dotnet script init
This will create main.csx along with the launch configuration needed to debug the script in VS Code.
.
├── .vscode
│ └── launch.json
├── main.csx
└── omnisharp.json
We can also initialize a folder using a custom filename.
dotnet script init custom.csx
Instead of main.csx, which is the default, we now have a file named custom.csx.
.
├── .vscode
│ └── launch.json
├── custom.csx
└── omnisharp.json
Note: Executing dotnet script init inside a folder that already contains one or more script files will not create the main.csx file.
Scripts can be executed directly from the shell as if they were executables.
foo.csx arg1 arg2 arg3
OSX/Linux
Just like all scripts, on OSX/Linux you need to have a #! and mark the file as executable via chmod +x foo.csx. If you use dotnet script init to create your csx it will automatically have the #! directive and be marked as executable.
The OSX/Linux shebang directive should be #!/usr/bin/env dotnet-script
#!/usr/bin/env dotnet-script
Console.WriteLine("Hello world");
You can execute your script using dotnet script or dotnet-script, which allows you to pass arguments to control your script execution more.
foo.csx arg1 arg2 arg3
dotnet script foo.csx -- arg1 arg2 arg3
dotnet-script foo.csx -- arg1 arg2 arg3
All arguments after -- are passed to the script in the following way:
dotnet script foo.csx -- arg1 arg2 arg3
Then you can access the arguments in the script context using the global Args collection:
foreach (var arg in Args)
{
Console.WriteLine(arg);
}
All arguments before -- are processed by dotnet script. For example, the following command line
dotnet script -d foo.csx -- -d
will pass the -d before -- to dotnet script and enable the debug mode, whereas the -d after -- is passed to the script for its own interpretation of the argument.
dotnet script has built-in support for referencing NuGet packages directly from within the script.
#r "nuget: AutoMapper, 6.1.0"
Note: Omnisharp needs to be restarted after adding a new package reference
We can define package sources using a NuGet.Config file in the script root folder. In addition to being used during execution of the script, it will also be used by OmniSharp, which provides language services for packages resolved from these package sources.
As an alternative to maintaining a local NuGet.Config file, we can define these package sources globally, either at the user level or at the computer level, as described in Configuring NuGet Behaviour.
It is also possible to specify package sources when executing the script.
dotnet script foo.csx -s https://SomePackageSource
Multiple package sources can be specified like this:
dotnet script foo.csx -s https://SomePackageSource -s https://AnotherPackageSource
Dotnet-Script can create a standalone executable or DLL for your script.
Switch | Long switch | description |
---|---|---|
-o | --output | Directory where the published executable should be placed. Defaults to a 'publish' folder in the current directory. |
-n | --name | The name for the generated DLL (executable not supported at this time). Defaults to the name of the script. |
| --dll | Publish to a .dll instead of an executable. |
-c | --configuration | Configuration to use for publishing the script [Release/Debug]. Default is "Debug" |
-d | --debug | Enables debug output. |
-r | --runtime | The runtime used when publishing the self contained executable. Defaults to your current runtime. |
You can run the executable directly, independent of the dotnet installation, while the DLL can be run using the dotnet CLI like this:
dotnet script exec {path_to_dll} -- arg1 arg2
We provide two types of caching, the dependency cache and the execution cache, which are explained in detail below. In order for any of these caches to be enabled, it is required that all NuGet package references are specified using an exact version number. The reason for this constraint is that we need to make sure that we don't execute a script with a stale dependency graph.
In order to resolve the dependencies for a script, a dotnet restore is executed under the hood to produce a project.assets.json file from which we can figure out all the dependencies we need to add to the compilation. This is an out-of-process operation and represents a significant overhead to the script execution. So this cache works by looking at all the dependencies specified in the script(s), either in the form of NuGet package references or assembly file references. If these dependencies match the dependencies from the last script execution, we skip the restore and read the dependencies from the already generated project.assets.json file. If any of the dependencies have changed, we must restore again to obtain the new dependency graph.
In order to execute a script it needs to be compiled first and since that is a CPU and time consuming operation, we make sure that we only compile when the source code has changed. This works by creating a SHA256 hash from all the script files involved in the execution. This hash is written to a temporary location along with the DLL that represents the result of the script compilation. When a script is executed the hash is computed and compared with the hash from the previous compilation. If they match there is no need to recompile and we run from the already compiled DLL. If the hashes don't match, the cache is invalidated and we recompile.
You can override this automatic caching by passing --no-cache flag, which will bypass both caches and cause dependency resolution and script compilation to happen every time we execute the script.
The temporary location used for caches is a sub-directory named dotnet-script under (in order of priority):
- DOTNET_SCRIPT_CACHE_LOCATION, if defined and value is not empty.
- $XDG_CACHE_HOME if defined, otherwise $HOME/.cache
- ~/Library/Caches
- Path.GetTempPath for the platform.
The days of debugging scripts using Console.WriteLine are over. One major feature of dotnet script is the ability to debug scripts directly in VS Code. Just set a breakpoint anywhere in your script file(s) and hit F5 (start debugging).
Script packages are a way of organizing reusable scripts into NuGet packages that can be consumed by other scripts. This means that we now can leverage scripting infrastructure without the need for any kind of bootstrapping.
A script package is just a regular NuGet package that contains script files inside the content or contentFiles folder.
The following example shows how the scripts are laid out inside the NuGet package according to the standard convention.
└── contentFiles
└── csx
└── netstandard2.0
└── main.csx
This example contains just the main.csx file in the root folder, but packages may have multiple script files either in the root folder or in subfolders below the root folder.
When loading a script package we will look for an entry point script to be loaded. This entry point script is identified by one of the following:
- main.csx in the root folder
If the entry point script cannot be determined, we will simply load all the script files in the package.
The advantage of using an entry point script is that we can control the loading of other scripts from the package.
To consume a script package, all we need to do is specify the NuGet package in the #load directive.
The following example loads the simple-targets package that contains script files to be included in our script.
#load "nuget:simple-targets-csx, 6.0.0"
using static SimpleTargets;
var targets = new TargetDictionary();
targets.Add("default", () => Console.WriteLine("Hello, world!"));
Run(Args, targets);
Note: Debugging also works for script packages, so that we can easily step into the scripts that are brought in using the #load directive.
Scripts don't actually have to exist locally on the machine. We can also execute scripts that are made available on an http(s) endpoint.
This means that we can create a Gist on Github and execute it just by providing the URL to the Gist.
This Gist contains a script that prints out "Hello World"
We can execute the script like this
dotnet script https://gist.githubusercontent.com/seesharper/5d6859509ea8364a1fdf66bbf5b7923d/raw/0a32bac2c3ea807f9379a38e251d93e39c8131cb/HelloWorld.csx
That is a pretty long URL, so why not make it a TinyURL, like this:
dotnet script https://tinyurl.com/y8cda9zt
A pretty common scenario is that we have logic that is relative to the script path. We don't want to require the user to be in a certain directory for these paths to resolve correctly so here is how to provide the script path and the script folder regardless of the current working directory.
public static string GetScriptPath([CallerFilePath] string path = null) => path;
public static string GetScriptFolder([CallerFilePath] string path = null) => Path.GetDirectoryName(path);
Tip: Put these methods as top level methods in a separate script file and #load that file wherever access to the script path and/or folder is needed.
This release contains a C# REPL (Read-Evaluate-Print-Loop). The REPL mode ("interactive mode") is started by executing dotnet-script without any arguments.
The interactive mode allows you to supply individual C# code blocks and have them executed as soon as you press Enter. The REPL is configured with the same default set of assembly references and using statements as regular CSX script execution.
Once dotnet-script starts you will see a prompt for input. You can start typing C# code there.
~$ dotnet script
> var x = 1;
> x+x
2
If you submit an unterminated expression into the REPL (no ; at the end), it will be evaluated and the result will be serialized using a formatter and printed in the output. This is a bit more interesting than just calling ToString() on the object, because it attempts to capture the actual structure of the object. For example:
~$ dotnet script
> var x = new List<string>();
> x.Add("foo");
> x
List<string>(1) { "foo" }
> x.Add("bar");
> x
List<string>(2) { "foo", "bar" }
>
REPL also supports inline Nuget packages - meaning the Nuget packages can be installed into the REPL from within the REPL. This is done via our #r and #load from Nuget support and uses identical syntax.
~$ dotnet script
> #r "nuget: Automapper, 6.1.1"
> using AutoMapper;
> typeof(MapperConfiguration)
[AutoMapper.MapperConfiguration]
> #load "nuget: simple-targets-csx, 6.0.0";
> using static SimpleTargets;
> typeof(TargetDictionary)
[Submission#0+SimpleTargets+TargetDictionary]
Using Roslyn syntax parsing, we also support multiline REPL mode. This means that if you have an uncompleted code block and press Enter, we will automatically enter the multiline mode. The mode is indicated by the * character. This is particularly useful for declaring classes and other more complex constructs.
~$ dotnet script
> class Foo {
* public string Bar {get; set;}
* }
> var foo = new Foo();
Aside from the regular C# script code, you can invoke the following commands (directives) from within the REPL:
Command | Description |
---|---|
#load | Load a script into the REPL (same as #load usage in CSX) |
#r | Load an assembly into the REPL (same as #r usage in CSX) |
#reset | Reset the REPL back to initial state (without restarting it) |
#cls | Clear the console screen without resetting the REPL state |
#exit | Exits the REPL |
You can execute a CSX script and, at the end of it, drop yourself into the context of the REPL. This way, the REPL becomes "seeded" with your code - all the classes, methods or variables are available in the REPL context. This is achieved by running a script with an -i flag.
For example, given the following CSX script:
var msg = "Hello World";
Console.WriteLine(msg);
When you run this with the -i flag, Hello World is printed, the REPL starts, and the msg variable is available in the REPL context.
~$ dotnet script foo.csx -i
Hello World
>
You can also seed the REPL from inside the REPL - at any point - by invoking a #load directive pointed at a specific file. For example:
~$ dotnet script
> #load "foo.csx"
Hello World
>
The following example shows how we can pipe data in and out of a script.
The UpperCase.csx script simply converts the standard input to upper case and writes it back out to standard output.
using (var streamReader = new StreamReader(Console.OpenStandardInput()))
{
Write(streamReader.ReadToEnd().ToUpper());
}
We can now simply pipe the output from one command into our script like this.
echo "This is some text" | dotnet script UpperCase.csx
THIS IS SOME TEXT
The first thing we need to do is add the following to the launch.json file, which allows VS Code to debug a running process.
{
"name": ".NET Core Attach",
"type": "coreclr",
"request": "attach",
"processId": "${command:pickProcess}"
}
To debug this script we need a way to attach the debugger in VS Code and the simplest thing we can do here is to wait for the debugger to attach by adding this method somewhere.
public static void WaitForDebugger()
{
Console.WriteLine("Attach Debugger (VS Code)");
while(!Debugger.IsAttached)
{
}
}
To debug the script when executing it from the command line we can do something like
WaitForDebugger();
using (var streamReader = new StreamReader(Console.OpenStandardInput()))
{
Write(streamReader.ReadToEnd().ToUpper()); // <- SET BREAKPOINT HERE
}
Now when we run the script from the command line we will get
$ echo "This is some text" | dotnet script UpperCase.csx
Attach Debugger (VS Code)
This now gives us a chance to attach the debugger before stepping into the script: from VS Code, select the .NET Core Attach debugger and pick the process that represents the executing script.
Once that is done we should see our breakpoint being hit.
By default, scripts will be compiled using the debug configuration. This is to ensure that we can debug a script in VS Code as well as attaching a debugger for long running scripts.
There are however situations where we might need to execute a script that is compiled with the release configuration. For instance, running benchmarks using BenchmarkDotNet is not possible unless the script is compiled with the release configuration.
We can specify this when executing the script.
dotnet script foo.csx -c release
Starting from version 0.50.0, dotnet-script supports .NET Core 3.0 and all the C# 8 features. The way we deal with nullable reference types in dotnet-script is that we turn every warning related to nullable reference types into compiler errors. This means every warning between CS8600 and CS8655 is treated as an error when compiling the script.
Nullable reference types are turned off by default, and the way we enable them is using the #nullable enable compiler directive. This means that existing scripts will continue to work, but we can now opt in to this new feature.
#!/usr/bin/env dotnet-script
#nullable enable
string name = null;
Trying to execute the script will result in the following error
main.csx(5,15): error CS8625: Cannot convert null literal to non-nullable reference type.
We will also see this when working with scripts in VS Code under the problems panel.
Download Details:
Author: filipw
Source Code: https://github.com/filipw/dotnet-script
License: MIT License
In this article, we’ll discuss how to use jQuery Ajax for ASP.NET Core MVC CRUD Operations using Bootstrap Modal. With jQuery Ajax, we can make HTTP request to controller action methods without reloading the entire page, like a single page application.
To demonstrate CRUD operations – insert, update, delete and retrieve, the project will be dealing with details of a normal bank transaction. GitHub repository for this demo project : https://bit.ly/33KTJAu.
Sub-topics discussed:
In Visual Studio 2019, go to File > New > Project (Ctrl + Shift + N).
From the new project window, select ASP.NET Core Web Application.
Once you provide the project name and location, select Web Application (Model-View-Controller) and uncheck HTTPS Configuration. The above steps will create a brand new ASP.NET Core MVC project.
Let’s create a database for this application using Entity Framework Core. For that, we have to install the corresponding NuGet packages. Right-click on the project in Solution Explorer, select Manage NuGet Packages, and from the Browse tab, install the following 3 packages.
Now let’s define the DB model class file – /Models/TransactionModel.cs.
using System;
using System.ComponentModel;
using System.ComponentModel.DataAnnotations;
using System.ComponentModel.DataAnnotations.Schema;

public class TransactionModel
{
[Key]
public int TransactionId { get; set; }
[Column(TypeName ="nvarchar(12)")]
[DisplayName("Account Number")]
[Required(ErrorMessage ="This Field is required.")]
[MaxLength(12,ErrorMessage ="Maximum 12 characters only")]
public string AccountNumber { get; set; }
[Column(TypeName ="nvarchar(100)")]
[DisplayName("Beneficiary Name")]
[Required(ErrorMessage = "This Field is required.")]
public string BeneficiaryName { get; set; }
[Column(TypeName ="nvarchar(100)")]
[DisplayName("Bank Name")]
[Required(ErrorMessage = "This Field is required.")]
public string BankName { get; set; }
[Column(TypeName ="nvarchar(11)")]
[DisplayName("SWIFT Code")]
[Required(ErrorMessage = "This Field is required.")]
[MaxLength(11)]
public string SWIFTCode { get; set; }
[DisplayName("Amount")]
[Required(ErrorMessage = "This Field is required.")]
public int Amount { get; set; }
[DisplayFormat(DataFormatString = "{0:MM/dd/yyyy}")]
public DateTime Date { get; set; }
}
Here we’ve defined model properties for the transaction with proper validation. Now let’s define the DbContext class for EF Core.
The following is a collection of tips I find to be useful when working with the Swift language. More content is available on my Twitter account!
Property Wrappers allow developers to wrap properties with specific behaviors, that will be seamlessly triggered whenever the properties are accessed.
While their primary use case is to implement business logic within our apps, it's also possible to use Property Wrappers as debugging tools!
For example, we could build a wrapper called @History, that would be added to a property while debugging and would keep track of all the values set to this property.
import Foundation
@propertyWrapper
struct History<Value> {
private var value: Value
private(set) var history: [Value] = []
init(wrappedValue: Value) {
self.value = wrappedValue
}
var wrappedValue: Value {
get { value }
set {
history.append(value)
value = newValue
}
}
var projectedValue: Self {
return self
}
}
// We can then decorate our business code
// with the `@History` wrapper
struct User {
@History var name: String = ""
}
var user = User()
// All the existing call sites will still
// compile, without the need for any change
user.name = "John"
user.name = "Jane"
// But now we can also access a history of
// all the previous values!
user.$name.history // ["", "John"]
String interpolation
Swift 5 gave us the possibility to define our own custom String interpolation methods.
This feature can be used to power many use cases, but there is one that is guaranteed to make sense in most projects: localizing user-facing strings.
import Foundation
extension String.StringInterpolation {
mutating func appendInterpolation(localized key: String, _ args: CVarArg...) {
let localized = String(format: NSLocalizedString(key, comment: ""), arguments: args)
appendLiteral(localized)
}
}
/*
Let's assume that this is the content of our Localizable.strings:
"welcome.screen.greetings" = "Hello %@!";
*/
let userName = "John"
print("\(localized: "welcome.screen.greetings", userName)") // Hello John!
structs
If you’ve always wanted to use some kind of inheritance mechanism for your structs, Swift 5.1 is going to make you very happy!
Using the new KeyPath-based dynamic member lookup, you can implement some pseudo-inheritance, where a type inherits the API of another one 🎉
(However, be careful, I’m definitely not advocating inheritance as a go-to solution 🙃)
import Foundation
protocol Inherits {
associatedtype SuperType
var `super`: SuperType { get }
}
extension Inherits {
subscript<T>(dynamicMember keyPath: KeyPath<SuperType, T>) -> T {
return self.`super`[keyPath: keyPath]
}
}
struct Person {
let name: String
}
@dynamicMemberLookup
struct User: Inherits {
let `super`: Person
let login: String
let password: String
}
let user = User(super: Person(name: "John Appleseed"), login: "Johnny", password: "1234")
user.name // "John Appleseed"
user.login // "Johnny"
NSAttributedString through a Function Builder
Swift 5.1 introduced Function Builders: a great tool for building custom DSL syntaxes, like SwiftUI. However, one doesn't need to be building a full-fledged DSL in order to leverage them.
For example, it's possible to write a simple Function Builder, whose job will be to compose together individual instances of NSAttributedString through a nicer syntax than the standard API.
import UIKit
@_functionBuilder
class NSAttributedStringBuilder {
static func buildBlock(_ components: NSAttributedString...) -> NSAttributedString {
let result = NSMutableAttributedString(string: "")
return components.reduce(into: result) { (result, current) in result.append(current) }
}
}
extension NSAttributedString {
class func composing(@NSAttributedStringBuilder _ parts: () -> NSAttributedString) -> NSAttributedString {
return parts()
}
}
let result = NSAttributedString.composing {
NSAttributedString(string: "Hello",
attributes: [.font: UIFont.systemFont(ofSize: 24),
.foregroundColor: UIColor.red])
NSAttributedString(string: " world!",
attributes: [.font: UIFont.systemFont(ofSize: 20),
.foregroundColor: UIColor.orange])
}
switch and if as expressions
Contrary to other languages, like Kotlin, Swift does not allow switch and if to be used as expressions, meaning that the following code is not valid Swift:
let constant = if condition {
someValue
} else {
someOtherValue
}
A common solution to this problem is to wrap the if or switch statement within a closure that will then be immediately called. While this approach does manage to achieve the desired goal, it makes for a rather poor syntax.
To avoid the ugly trailing () and improve readability, you can define a resultOf function that will serve the exact same purpose in a more elegant way.
import Foundation
func resultOf<T>(_ code: () -> T) -> T {
return code()
}
let randomInt = Int.random(in: 0...3)
let spelledOut: String = resultOf {
switch randomInt {
case 0:
return "Zero"
case 1:
return "One"
case 2:
return "Two"
case 3:
return "Three"
default:
return "Out of range"
}
}
print(spelledOut)
guard statements
A guard statement is a very convenient way for the developer to assert that a condition is met, in order for the execution of the program to keep going.
However, since the body of a guard statement is meant to be executed when the condition evaluates to false, the use of the negation (!) operator within the condition of a guard statement can make the code hard to read, as it becomes a double negative.
A nice trick to avoid such double negatives is to encapsulate the use of the ! operator within a new property or function, whose name does not include a negative.
import Foundation
extension Collection {
var hasElements: Bool {
return !isEmpty
}
}
let array = Bool.random() ? [1, 2, 3] : []
guard array.hasElements else { fatalError("array was empty") }
print(array)
init without losing the compiler-generated one
It's common knowledge for Swift developers that, when you define a struct, the compiler is going to automatically generate a memberwise init for you. That is, unless you also define an init of your own, because then the compiler won't generate any memberwise init.
Yet, there are many instances where we might enjoy the opportunity to get both. As it turns out, this goal is quite easy to achieve: you just need to define your own init in an extension rather than inside the type definition itself.
import Foundation
struct Point {
let x: Int
let y: Int
}
extension Point {
init() {
x = 0
y = 0
}
}
let usingDefaultInit = Point(x: 4, y: 3)
let usingCustomInit = Point()
enum
Swift does not really have out-of-the-box support for namespaces. One could argue that a Swift module can be seen as a namespace, but creating a dedicated Framework for this sole purpose can legitimately be regarded as overkill.
Some developers have taken the habit of using a struct which only contains static fields to implement a namespace. While this does the job, it requires us to remember to implement an empty private init(), because it wouldn't make sense for such a struct to be instantiated.
It's actually possible to take this approach one step further, by replacing the struct with an enum. While it might seem weird to have an enum with no case, it's actually a very idiomatic way to declare a type that cannot be instantiated.
import Foundation
enum NumberFormatterProvider {
static var currencyFormatter: NumberFormatter {
let formatter = NumberFormatter()
formatter.numberStyle = .currency
formatter.roundingIncrement = 0.01
return formatter
}
static var decimalFormatter: NumberFormatter {
let formatter = NumberFormatter()
formatter.numberStyle = .decimal
formatter.decimalSeparator = ","
return formatter
}
}
NumberFormatterProvider() // ❌ impossible to instantiate by mistake
NumberFormatterProvider.currencyFormatter.string(from: 2.456) // $2.46
NumberFormatterProvider.decimalFormatter.string(from: 2.456) // 2,456
Never to represent impossible code paths
Never is quite a peculiar type in the Swift Standard Library: it is defined as an empty enum, enum Never { }.
While this might seem odd at first glance, it actually yields a very interesting property: it makes it a type that cannot be constructed (i.e. it possesses no instances).
This way, Never can be used as a generic parameter to let the compiler know that a particular feature will not be used.
import Foundation
enum Result<Value, Error> {
case success(value: Value)
case failure(error: Error)
}
func willAlwaysSucceed(_ completion: @escaping ((Result<String, Never>) -> Void)) {
completion(.success(value: "Call was successful"))
}
willAlwaysSucceed( { result in
switch result {
case .success(let value):
print(value)
// the compiler knows that the `failure` case cannot happen
// so it doesn't require us to handle it.
}
})
Decodable enum
Swift's Codable framework does a great job at seamlessly decoding entities from a JSON stream. However, when we integrate web services, we are sometimes left to deal with JSONs that require behaviors that Codable does not provide out-of-the-box.
For instance, we might have a string-based or integer-based enum, and be required to set it to a default value when the data found in the JSON does not match any of its cases.
We might be tempted to implement this via an extensive switch statement over all the possible cases, but there is a much shorter alternative through the initializer init?(rawValue:):
import Foundation
enum State: String, Decodable {
case active
case inactive
case undefined
init(from decoder: Decoder) throws {
let container = try decoder.singleValueContainer()
let decodedString = try container.decode(String.self)
self = State(rawValue: decodedString) ?? .undefined
}
}
let data = """
["active", "inactive", "foo"]
""".data(using: .utf8)!
let decoded = try! JSONDecoder().decode([State].self, from: data)
print(decoded) // [State.active, State.inactive, State.undefined]
Dependency injection boils down to a simple idea: when an object requires a dependency, it shouldn't create it by itself; instead, it should be given a function that creates it.
Now the great thing with Swift is that, not only can a function take another function as a parameter, but that parameter can also be given a default value.
When you combine both those features, you can end up with a dependency injection pattern that is both lightweight on boilerplate, but also type safe.
import Foundation
protocol Service {
func call() -> String
}
class ProductionService: Service {
func call() -> String {
return "This is the production"
}
}
class MockService: Service {
func call() -> String {
return "This is a mock"
}
}
typealias Provider<T> = () -> T
class Controller {
let service: Service
init(serviceProvider: Provider<Service> = { return ProductionService() }) {
self.service = serviceProvider()
}
func work() {
print(service.call())
}
}
let productionController = Controller()
productionController.work() // prints "This is the production"
let mockedController = Controller(serviceProvider: { return MockService() })
mockedController.work() // prints "This is a mock"
Singletons are pretty bad. They make your architecture rigid and tightly coupled, which then results in your code being hard to test and refactor. Instead of using singletons, your code should rely on dependency injection, which is a much more architecturally sound approach.
But singletons are so easy to use, and dependency injection requires us to do extra-work. So maybe, for simple situations, we could find an in-between solution?
One possible solution is to rely on one of Swift's best-known features: protocol-oriented programming. Using a protocol, we declare and access our dependency. We then store it in a private singleton, and perform the injection through an extension of said protocol.
This way, our code will indeed be decoupled from its dependency, while at the same time keeping the boilerplate to a minimum.
import Foundation
protocol Formatting {
var formatter: NumberFormatter { get }
}
private let sharedFormatter: NumberFormatter = {
let sharedFormatter = NumberFormatter()
sharedFormatter.numberStyle = .currency
return sharedFormatter
}()
extension Formatting {
var formatter: NumberFormatter { return sharedFormatter }
}
class ViewModel: Formatting {
var displayableAmount: String?
func updateDisplay(to amount: Double) {
displayableAmount = formatter.string(for: amount)
}
}
let viewModel = ViewModel()
viewModel.updateDisplay(to: 42000.45)
viewModel.displayableAmount // "$42,000.45"
[weak self] and guard
Callbacks are a part of almost all iOS apps, and as frameworks such as RxSwift keep gaining in popularity, they become ever more present in our codebase.
Seasoned Swift developers are aware of the potential memory leaks that @escaping callbacks can produce, so they make sure to always use [weak self] whenever they need to use self inside such a context. And when they need self to be non-optional, they then add a guard statement along with it.
Consequently, this syntax of a [weak self] followed by a guard rapidly tends to appear everywhere in the codebase. The good thing is that, through a little protocol-oriented trick, it's actually possible to get rid of this tedious syntax without losing any of its benefits!
import Foundation
import PlaygroundSupport
PlaygroundPage.current.needsIndefiniteExecution = true
protocol Weakifiable: class { }
extension Weakifiable {
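// Wraps a closure so that it captures `self` weakly and is silently skipped once `self` has been deallocated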
func weakify(_ code: @escaping (Self) -> Void) -> () -> Void {
return { [weak self] in
guard let self = self else { return }
code(self)
}
}
func weakify<T>(_ code: @escaping (T, Self) -> Void) -> (T) -> Void {
return { [weak self] arg in
guard let self = self else { return }
code(arg, self)
}
}
}
extension NSObject: Weakifiable { }
class Producer: NSObject {
deinit {
print("deinit Producer")
}
private var handler: (Int) -> Void = { _ in }
func register(handler: @escaping (Int) -> Void) {
self.handler = handler
DispatchQueue.main.asyncAfter(deadline: .now() + 1.0, execute: { self.handler(42) })
}
}
class Consumer: NSObject {
deinit {
print("deinit Consumer")
}
let producer = Producer()
func consume() {
producer.register(handler: weakify { result, strongSelf in
strongSelf.handle(result)
})
}
private func handle(_ result: Int) {
print("🎉 \(result)")
}
}
var consumer: Consumer? = Consumer()
consumer?.consume()
DispatchQueue.main.asyncAfter(deadline: .now() + 2.0, execute: { consumer = nil })
// This code prints:
// 🎉 42
// deinit Consumer
// deinit Producer
Asynchronous functions are a big part of iOS APIs, and most developers are familiar with the challenge they pose when one needs to sequentially call several asynchronous APIs.
This often results in callbacks being nested into one another, a predicament often referred to as callback hell.
Many third-party frameworks are able to tackle this issue, for instance RxSwift or PromiseKit. Yet, for simple instances of the problem, there is no need to use such big guns, as it can actually be solved with simple function composition.
import Foundation
typealias CompletionHandler<Result> = (Result?, Error?) -> Void
infix operator ~>: MultiplicationPrecedence
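// Chains two callback-based functions: the result of `first` is fed into `second`, and any error short-circuits the chain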
func ~> <T, U>(_ first: @escaping (CompletionHandler<T>) -> Void, _ second: @escaping (T, CompletionHandler<U>) -> Void) -> (CompletionHandler<U>) -> Void {
return { completion in
first({ firstResult, error in
guard let firstResult = firstResult else { completion(nil, error); return }
second(firstResult, { (secondResult, error) in
completion(secondResult, error)
})
})
}
}
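// Overload that chains a callback-based function with a synchronous transform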
func ~> <T, U>(_ first: @escaping (CompletionHandler<T>) -> Void, _ transform: @escaping (T) -> U) -> (CompletionHandler<U>) -> Void {
return { completion in
first({ result, error in
guard let result = result else { completion(nil, error); return }
completion(transform(result), nil)
})
}
}
func service1(_ completionHandler: CompletionHandler<Int>) {
completionHandler(42, nil)
}
func service2(arg: String, _ completionHandler: CompletionHandler<String>) {
completionHandler("🎉 \(arg)", nil)
}
let chainedServices = service1
~> { int in return String(int / 2) }
~> service2
chainedServices({ result, _ in
guard let result = result else { return }
print(result) // Prints: 🎉 21
})
Asynchronous functions are a great way to deal with future events without blocking a thread. Yet, there are times where we would like them to behave in exactly such a blocking way.
Think about writing unit tests and using mocked network calls. You will need to add complexity to your test in order to deal with asynchronous functions, whereas synchronous ones would be much easier to manage.
Thanks to Swift's proficiency in the functional paradigm, it is possible to write a function whose job is to take an asynchronous function and transform it into a synchronous one.
import Foundation
func makeSynchrone<A, B>(_ asyncFunction: @escaping (A, (B) -> Void) -> Void) -> (A) -> B {
return { arg in
// A semaphore with an initial count of 0 blocks the caller until the completion handler has fired
let semaphore = DispatchSemaphore(value: 0)
var result: B? = nil
asyncFunction(arg) {
result = $0
semaphore.signal()
}
semaphore.wait()
return result!
}
}
func myAsyncFunction(arg: Int, completionHandler: (String) -> Void) {
completionHandler("🎉 \(arg)")
}
let syncFunction = makeSynchrone(myAsyncFunction)
print(syncFunction(42)) // prints 🎉 42
Closures are a great way to interact with generic APIs, for instance APIs that allow manipulating data structures through the use of generic functions, such as filter() or sorted().
The annoying part is that closures tend to clutter your code with many instances of {, } and $0, which can quickly undermine its readability.
A nice alternative for a cleaner syntax is to use a KeyPath instead of a closure, along with an operator that will deal with transforming the provided KeyPath into a closure.
import Foundation
prefix operator ^
prefix func ^ <Element, Attribute>(_ keyPath: KeyPath<Element, Attribute>) -> (Element) -> Attribute {
return { element in element[keyPath: keyPath] }
}
struct MyData {
let int: Int
let string: String
}
let data = [MyData(int: 2, string: "Foo"), MyData(int: 4, string: "Bar")]
data.map(^\.int) // [2, 4]
data.map(^\.string) // ["Foo", "Bar"]
userInfo Dictionary
Many iOS APIs still rely on a userInfo Dictionary to handle use-case specific data. This Dictionary usually stores untyped values, and is declared as follows: [String: Any] (or sometimes [AnyHashable: Any]).
Retrieving data from such a structure will involve some conditional casting (via the as? operator), which is prone to both errors and repetitions. Yet, by introducing a custom subscript, it's possible to encapsulate all the tedious logic and end up with an easier and more robust API.
import Foundation
typealias TypedUserInfoKey<T> = (key: String, type: T.Type)
extension Dictionary where Key == String, Value == Any {
subscript<T>(_ typedKey: TypedUserInfoKey<T>) -> T? {
return self[typedKey.key] as? T
}
}
let userInfo: [String : Any] = ["Foo": 4, "Bar": "forty-two"]
let integerTypedKey = TypedUserInfoKey(key: "Foo", type: Int.self)
let intValue = userInfo[integerTypedKey] // returns 4
type(of: intValue) // returns Int?
let stringTypedKey = TypedUserInfoKey(key: "Bar", type: String.self)
let stringValue = userInfo[stringTypedKey] // returns "forty-two"
type(of: stringValue) // returns String?
MVVM is a great pattern to separate business logic from presentation logic. The main challenge to make it work, is to define a mechanism for the presentation layer to be notified of model updates.
RxSwift is a perfect choice to solve such a problem. Yet, some developers don't feel comfortable with leveraging a third-party library for such a central part of their architecture.
For those situations, it's possible to define a lightweight Variable type that will make the MVVM pattern very easy to use!
import Foundation
class Variable<Value> {
var value: Value {
didSet {
onUpdate?(value)
}
}
var onUpdate: ((Value) -> Void)? {
didSet {
onUpdate?(value)
}
}
init(_ value: Value, _ onUpdate: ((Value) -> Void)? = nil) {
self.value = value
self.onUpdate = onUpdate
self.onUpdate?(value)
}
}
let variable: Variable<String?> = Variable(nil)
variable.onUpdate = { data in
if let data = data {
print(data)
}
}
variable.value = "Foo"
variable.value = "Bar"
// prints:
// Foo
// Bar
typealias to its fullest
The keyword typealias allows developers to give a new name to an already existing type. For instance, Swift defines Void as a typealias of (), the empty tuple.
But a lesser-known feature of this mechanism is that it allows assigning concrete types to generic parameters, or renaming them. This can help make the semantics of generic types much clearer when used in specific use cases.
import Foundation
enum Either<Left, Right> {
case left(Left)
case right(Right)
}
typealias Result<Value> = Either<Value, Error>
typealias IntOrString = Either<Int, String>
forEach
Iterating through objects via the forEach(_:) method is a great alternative to the classic for loop, as it allows our code to be completely oblivious of the iteration logic. One limitation, however, is that forEach(_:) does not allow stopping the iteration midway.
Taking inspiration from the Objective-C implementation, we can write an overload that will allow the developer to stop the iteration, if needed.
import Foundation
extension Sequence {
func forEach(_ body: (Element, _ stop: inout Bool) throws -> Void) rethrows {
var stop = false
for element in self {
try body(element, &stop)
if stop {
return
}
}
}
}
["Foo", "Bar", "FooBar"].forEach { element, stop in
print(element)
stop = (element == "Bar")
}
// Prints:
// Foo
// Bar
reduce()
Functional programming is a great way to simplify a codebase. For instance, reduce is an alternative to the classic for loop, without most of the boilerplate. Unfortunately, simplicity often comes at the price of performance.
Consider that you want to remove duplicate values from a Sequence. While reduce() is a perfectly fine way to express this computation, the performance will be suboptimal, because of all the unnecessary Array copying that will happen every time its closure gets called.
That's when reduce(into:_:) comes into play. This version of reduce leverages the capabilities of copy-on-write types (such as Array or Dictionary) in order to avoid unnecessary copying, which results in a great performance boost.
import Foundation
func time(averagedExecutions: Int = 1, _ code: () -> Void) {
let start = Date()
for _ in 0..<averagedExecutions { code() }
let end = Date()
let duration = end.timeIntervalSince(start) / Double(averagedExecutions)
print("time: \(duration)")
}
let data = (1...1_000).map { _ in Int(arc4random_uniform(256)) }
// runs in 0.63s
time {
let noDuplicates: [Int] = data.reduce([], { $0.contains($1) ? $0 : $0 + [$1] })
}
// runs in 0.15s
time {
let noDuplicates: [Int] = data.reduce(into: [], { if !$0.contains($1) { $0.append($1) } } )
}
UI components such as UITableView and UICollectionView rely on reuse identifiers in order to efficiently recycle the views they display. Often, those reuse identifiers take the form of a static hardcoded String that will be used for every instance of their class.
Through protocol-oriented programming, it's possible to avoid those hardcoded values, and instead use the name of the type as a reuse identifier.
import Foundation
import UIKit
protocol Reusable {
static var reuseIdentifier: String { get }
}
extension Reusable {
static var reuseIdentifier: String {
return String(describing: self)
}
}
extension UITableViewCell: Reusable { }
extension UITableView {
func register<T: UITableViewCell>(_ class: T.Type) {
register(`class`, forCellReuseIdentifier: T.reuseIdentifier)
}
func dequeueReusableCell<T: UITableViewCell>(for indexPath: IndexPath) -> T {
return dequeueReusableCell(withIdentifier: T.reuseIdentifier, for: indexPath) as! T
}
}
class MyCell: UITableViewCell { }
let tableView = UITableView()
tableView.register(MyCell.self)
let myCell: MyCell = tableView.dequeueReusableCell(for: [0, 0])
The C language has a construct called union, that allows a single variable to hold values from different types. While Swift does not provide such a construct, it provides enums with associated values, which allows us to define a type called Either that implements a union of two types.
import Foundation
enum Either<A, B> {
case left(A)
case right(B)
func either(ifLeft: ((A) -> Void)? = nil, ifRight: ((B) -> Void)? = nil) {
switch self {
case let .left(a):
ifLeft?(a)
case let .right(b):
ifRight?(b)
}
}
}
extension Bool { static func random() -> Bool { return arc4random_uniform(2) == 0 } }
var intOrString: Either<Int, String> = Bool.random() ? .left(2) : .right("Foo")
intOrString.either(ifLeft: { print($0 + 1) }, ifRight: { print($0 + "Bar") })
If you're interested by this kind of data structure, I strongly recommend that you learn more about Algebraic Data Types.
Most of the time, when we create a .xib file, we give it the same name as its associated class. Consequently, if we later refactor our code and rename such a class, we run the risk of forgetting to rename the associated .xib.
While the error will often be easy to catch, if the .xib is used in a remote section of the app, it might go unnoticed for some time. Fortunately it's possible to build custom test predicates that will assert that 1) for a given class, there exists a .nib with the same name in a given Bundle, and 2) for all the .nib files in a given Bundle, there exists a class with the same name.
import XCTest
public func XCTAssertClassHasNib(_ class: AnyClass, bundle: Bundle, file: StaticString = #file, line: UInt = #line) {
let associatedNibURL = bundle.url(forResource: String(describing: `class`), withExtension: "nib")
XCTAssertNotNil(associatedNibURL, "Class \"\(`class`)\" has no associated nib file", file: file, line: line)
}
public func XCTAssertNibHaveClasses(_ bundle: Bundle, file: StaticString = #file, line: UInt = #line) {
guard let bundleName = bundle.infoDictionary?["CFBundleName"] as? String,
let basePath = bundle.resourcePath,
let enumerator = FileManager.default.enumerator(at: URL(fileURLWithPath: basePath),
includingPropertiesForKeys: nil,
options: [.skipsHiddenFiles, .skipsSubdirectoryDescendants]) else { return }
var nibFilesURLs = [URL]()
for case let fileURL as URL in enumerator {
if fileURL.pathExtension.uppercased() == "NIB" {
nibFilesURLs.append(fileURL)
}
}
nibFilesURLs.map { $0.lastPathComponent }
.compactMap { $0.split(separator: ".").first }
.map { String($0) }
.forEach {
let associatedClass: AnyClass? = bundle.classNamed("\(bundleName).\($0)")
XCTAssertNotNil(associatedClass, "File \"\($0).nib\" has no associated class", file: file, line: line)
}
}
XCTAssertClassHasNib(MyFirstTableViewCell.self, bundle: Bundle(for: AppDelegate.self))
XCTAssertClassHasNib(MySecondTableViewCell.self, bundle: Bundle(for: AppDelegate.self))
XCTAssertNibHaveClasses(Bundle(for: AppDelegate.self))
Many thanks to Benjamin Lavialle for coming up with the idea behind the second test predicate.
Seasoned Swift developers know it: a protocol with associated type (PAT) "can only be used as a generic constraint because it has Self or associated type requirements". When we really need to use a PAT to type a variable, the go-to workaround is to use a type-erased wrapper.
While this solution works perfectly, it requires a fair amount of boilerplate code. In instances where we are only interested in exposing one particular function of the PAT, a shorter approach using function types is possible.
import Foundation
import UIKit
protocol Configurable {
associatedtype Model
func configure(with model: Model)
}
typealias Configurator<Model> = (Model) -> ()
extension UILabel: Configurable {
func configure(with model: String) {
self.text = model
}
}
let label = UILabel()
let configurator: Configurator<String> = label.configure
configurator("Foo")
label.text // "Foo"
UIKit exposes a very powerful and simple API to perform view animations. However, this API can become a little bit quirky to use when we want to perform animations sequentially, because it involves nesting closures within one another, which produces notoriously hard-to-maintain code.
Nonetheless, it's possible to define a rather simple class, that will expose a really nicer API for this particular use case 👌
import Foundation
import UIKit
class AnimationSequence {
typealias Animations = () -> Void
private let current: Animations
private let duration: TimeInterval
private var next: AnimationSequence? = nil
init(animations: @escaping Animations, duration: TimeInterval) {
self.current = animations
self.duration = duration
}
@discardableResult func append(animations: @escaping Animations, duration: TimeInterval) -> AnimationSequence {
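// Walk down to the last animation in the chain and attach the new step there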
var lastAnimation = self
while let nextAnimation = lastAnimation.next {
lastAnimation = nextAnimation
}
lastAnimation.next = AnimationSequence(animations: animations, duration: duration)
return self
}
func run() {
UIView.animate(withDuration: duration, animations: current, completion: { finished in
if finished, let next = self.next {
next.run()
}
})
}
}
var firstView = UIView()
var secondView = UIView()
firstView.alpha = 0
secondView.alpha = 0
AnimationSequence(animations: { firstView.alpha = 1.0 }, duration: 1)
.append(animations: { secondView.alpha = 1.0 }, duration: 0.5)
.append(animations: { firstView.alpha = 0.0 }, duration: 2.0)
.run()
Debouncing is a very useful tool when dealing with UI inputs. Consider a search bar, whose content is used to query an API. It wouldn't make sense to perform a request for every character the user is typing, because as soon as a new character is entered, the result of the previous request has become irrelevant.
Instead, our code will perform much better if we "debounce" the API call, meaning that we will wait until some delay has passed, without the input being modified, before actually performing the call.
import Foundation
func debounced(delay: TimeInterval, queue: DispatchQueue = .main, action: @escaping (() -> Void)) -> () -> Void {
var workItem: DispatchWorkItem?
return {
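// Cancel any previously scheduled execution before scheduling a new one after the delay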
workItem?.cancel()
workItem = DispatchWorkItem(block: action)
queue.asyncAfter(deadline: .now() + delay, execute: workItem!)
}
}
let debouncedPrint = debounced(delay: 1.0) { print("Action performed!") }
debouncedPrint()
debouncedPrint()
debouncedPrint()
// After a 1 second delay, this gets
// printed only once to the console:
// Action performed!
Optional booleans
When we need to apply the standard boolean operators to Optional booleans, we often end up with a syntax unnecessarily crowded with unwrapping operations. By taking a cue from the world of three-valued logics, we can define a couple of operators that make working with Bool? values much nicer.
import Foundation
func && (lhs: Bool?, rhs: Bool?) -> Bool? {
switch (lhs, rhs) {
case (false, _), (_, false):
return false
case let (unwrapLhs?, unwrapRhs?):
return unwrapLhs && unwrapRhs
default:
return nil
}
}
func || (lhs: Bool?, rhs: Bool?) -> Bool? {
switch (lhs, rhs) {
case (true, _), (_, true):
return true
case let (unwrapLhs?, unwrapRhs?):
return unwrapLhs || unwrapRhs
default:
return nil
}
}
false && nil // false
true && nil // nil
[true, nil, false].reduce(true, &&) // false
nil || true // true
nil || false // nil
[true, nil, false].reduce(false, ||) // true
Sequence
Transforming a Sequence in order to remove all the duplicate values it contains is a classic use case. To implement it, one could be tempted to transform the Sequence into a Set, then back to an Array. The downside with this approach is that it will not preserve the order of the sequence, which can definitely be a dealbreaker. Using reduce() it is possible to provide a concise implementation that preserves ordering:
import Foundation
extension Sequence where Element: Equatable {
func duplicatesRemoved() -> [Element] {
return reduce([], { $0.contains($1) ? $0 : $0 + [$1] })
}
}
let data = [2, 5, 2, 3, 6, 5, 2]
data.duplicatesRemoved() // [2, 5, 3, 6]
Optional strings are very common in Swift code; for instance, many objects from UIKit expose the text they display as a String?. Many times you will need to manipulate this data as an unwrapped String, with a default value set to the empty string for nil cases.
While the nil-coalescing operator (??) is a perfectly fine way to achieve this goal, defining a computed variable like orEmpty can help a lot in cleaning up the syntax.
import Foundation
import UIKit
extension Optional where Wrapped == String {
var orEmpty: String {
switch self {
case .some(let value):
return value
case .none:
return ""
}
}
}
func doesNotWorkWithOptionalString(_ param: String) {
// do something with `param`
}
let label = UILabel()
label.text = "This is some text."
doesNotWorkWithOptionalString(label.text.orEmpty)
Every seasoned iOS developer knows it: objects from UIKit can only be accessed from the main thread. Any attempt to access them from a background thread is a guaranteed crash.
Still, running a costly computation in the background, and then using it to update the UI, can be a common pattern.
In such cases you can rely on asyncUI to encapsulate all the boilerplate code.
import Foundation
import UIKit
func asyncUI<T>(_ computation: @autoclosure @escaping () -> T, qos: DispatchQoS.QoSClass = .userInitiated, _ completion: @escaping (T) -> Void) {
DispatchQueue.global(qos: qos).async {
let value = computation()
DispatchQueue.main.async {
completion(value)
}
}
}
let label = UILabel()
func costlyComputation() -> Int { return (0..<10_000).reduce(0, +) }
asyncUI(costlyComputation()) { value in
label.text = "\(value)"
}
A debug view, from which any controller of an app can be instantiated and pushed on the navigation stack, has the potential to bring some real value to a development process. A requirement to build such a view is to have a list of all the classes from a given Bundle that inherit from UIViewController. With the following extension, retrieving this list becomes a piece of cake 🍰
import Foundation
import UIKit
import ObjectiveC
extension Bundle {
func viewControllerTypes() -> [UIViewController.Type] {
guard let bundlePath = self.executablePath else { return [] }
var size: UInt32 = 0
var rawClassNames: UnsafeMutablePointer<UnsafePointer<Int8>>!
var parsedClassNames = [String]()
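// objc_copyClassNamesForImage() returns the names of all the Objective-C classes compiled into the given binary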
rawClassNames = objc_copyClassNamesForImage(bundlePath, &size)
for index in 0..<size {
let className = rawClassNames[Int(index)]
if let name = NSString.init(utf8String:className) as String?,
NSClassFromString(name) is UIViewController.Type {
parsedClassNames.append(name)
}
}
return parsedClassNames
.sorted()
.compactMap { NSClassFromString($0) as? UIViewController.Type }
}
}
// Fetch all view controller types in UIKit
Bundle(for: UIViewController.self).viewControllerTypes()
I share the credit for this tip with Benoît Caron.
Update: As it turns out, map is actually a really bad name for this function, because it does not preserve composition of transformations, a property that is required to fit the definition of a real map function.
Surprisingly enough, the standard library doesn't define a map() function for dictionaries that allows mapping both keys and values into a new Dictionary. Nevertheless, such a function can be helpful, for instance when converting data across different frameworks.
import Foundation
extension Dictionary {
func map<T: Hashable, U>(_ transform: (Key, Value) throws -> (T, U)) rethrows -> [T: U] {
var result: [T: U] = [:]
for (key, value) in self {
let (transformedKey, transformedValue) = try transform(key, value)
result[transformedKey] = transformedValue
}
return result
}
}
let data = [0: 5, 1: 6, 2: 7]
data.map { ("\($0)", $1 * $1) } // ["2": 49, "0": 25, "1": 36]
nil values
Swift provides the function compactMap(), that can be used to remove nil values from a Sequence of optionals when calling it with an argument that just returns its parameter (i.e. compactMap { $0 }). Still, for such use cases it would be nice to get rid of the trailing closure.
The implementation isn't as straightforward as your usual extension, but once it has been written, the call site definitely gets cleaner 👌
import Foundation
protocol OptionalConvertible {
associatedtype Wrapped
func asOptional() -> Wrapped?
}
extension Optional: OptionalConvertible {
func asOptional() -> Wrapped? {
return self
}
}
extension Sequence where Element: OptionalConvertible {
func compacted() -> [Element.Wrapped] {
return compactMap { $0.asOptional() }
}
}
let data = [nil, 1, 2, nil, 3, 5, nil, 8, nil]
data.compacted() // [1, 2, 3, 5, 8]
It might happen that your code has to deal with values that come with an expiration date. In a game, it could be a score multiplier that will only last for 30 seconds. Or it could be an authentication token for an API, with a 15 minutes lifespan. In both instances you can rely on the type Expirable to encapsulate the expiration logic.
import Foundation
struct Expirable<T> {
private var innerValue: T
private(set) var expirationDate: Date
var value: T? {
return hasExpired() ? nil : innerValue
}
init(value: T, expirationDate: Date) {
self.innerValue = value
self.expirationDate = expirationDate
}
init(value: T, duration: Double) {
self.innerValue = value
self.expirationDate = Date().addingTimeInterval(duration)
}
func hasExpired() -> Bool {
return expirationDate < Date()
}
}
let expirable = Expirable(value: 42, duration: 3)
sleep(2)
expirable.value // 42
sleep(2)
expirable.value // nil
I share the credit for this tip with Benoît Caron.
map()
Almost all Apple devices able to run Swift code are powered by a multi-core CPU, so making good use of parallelism is a great way to improve code performance. map() is a perfect candidate for such an optimization, because it is almost trivial to define a parallel implementation.
import Foundation
extension Array {
func parallelMap<T>(_ transform: (Element) -> T) -> [T] {
let res = UnsafeMutablePointer<T>.allocate(capacity: count)
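// concurrentPerform() runs the iterations in parallel across the available cores, each one writing its result at its own index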
DispatchQueue.concurrentPerform(iterations: count) { i in
res[i] = transform(self[i])
}
let finalResult = Array<T>(UnsafeBufferPointer(start: res, count: count))
res.deallocate()
return finalResult
}
}
let array = (0..<1_000).map { $0 }
func work(_ n: Int) -> Int {
return (0..<n).reduce(0, +)
}
array.parallelMap { work($0) }
🚨 Make sure to only use parallelMap() when the transform function actually performs some costly computations. Otherwise performance will be systematically slower than using map(), because of the multithreading overhead.
During development of a feature that performs some heavy computations, it can be helpful to measure just how much time a chunk of code takes to run. The time() function is a nice tool for this purpose, because of how simple it is to add and then to remove when it is no longer needed.
import Foundation
func time(averagedExecutions: Int = 1, _ code: () -> Void) {
let start = Date()
for _ in 0..<averagedExecutions { code() }
let end = Date()
let duration = end.timeIntervalSince(start) / Double(averagedExecutions)
print("time: \(duration)")
}
time {
(0...10_000).map { $0 * $0 }
}
// time: 0.183973908424377
Concurrency is definitely one of those topics where the right encapsulation bears the potential to make your life so much easier. For instance, with this piece of code you can easily launch two computations in parallel, and have the results returned in a tuple.
import Foundation
func parallel<T, U>(_ left: @autoclosure () -> T, _ right: @autoclosure () -> U) -> (T, U) {
var leftRes: T?
var rightRes: U?
DispatchQueue.concurrentPerform(iterations: 2, execute: { id in
if id == 0 {
leftRes = left()
} else {
rightRes = right()
}
})
return (leftRes!, rightRes!)
}
let values = (1...100_000).map { $0 }
let results = parallel(values.map { $0 * $0 }, values.reduce(0, +))
Swift exposes three special variables #file, #line and #function, that are respectively set to the name of the current file, line and function. Those variables become very useful when writing custom logging functions or test predicates.
import Foundation
func log(_ message: String, _ file: String = #file, _ line: Int = #line, _ function: String = #function) {
print("[\(file):\(line)] \(function) - \(message)")
}
func foo() {
log("Hello world!")
}
foo() // [MyPlayground.playground:8] foo() - Hello world!
Swift 4.1 has introduced a new feature called Conditional Conformance, which allows a type to implement a protocol only when its generic type also does.
With this addition it becomes easy to let Optional implement Comparable only when Wrapped also implements Comparable:
import Foundation
extension Optional: Comparable where Wrapped: Comparable {
public static func < (lhs: Optional, rhs: Optional) -> Bool {
switch (lhs, rhs) {
case let (lhs?, rhs?):
return lhs < rhs
case (nil, _?):
return true // anything is greater than nil
case (_?, nil):
return false // nil is smaller than anything
case (nil, nil):
return false // nil is not smaller than itself
}
}
}
let data: [Int?] = [8, 4, 3, nil, 12, 4, 2, nil, -5]
data.sorted() // [nil, nil, Optional(-5), Optional(2), Optional(3), Optional(4), Optional(4), Optional(8), Optional(12)]
Any attempt to access an Array beyond its bounds will result in a crash. While it's possible to write conditions such as if index < array.count { array[index] } in order to prevent such crashes, this approach will rapidly become cumbersome.
A great thing is that this condition can be encapsulated in a custom subscript that will work on any Collection:
import Foundation
extension Collection {
subscript (safe index: Index) -> Element? {
return indices.contains(index) ? self[index] : nil
}
}
let data = [1, 3, 4]
data[safe: 1] // Optional(3)
data[safe: 10] // nil
Subscripting a string with a range can be very cumbersome in Swift 4. Let's face it, no one wants to write lines like someString[index(startIndex, offsetBy: 0)..<index(startIndex, offsetBy: 10)] on a regular basis.
Luckily, with the addition of one clever extension, strings can be sliced as easily as arrays 🎉
import Foundation
extension String {
public subscript(value: CountableClosedRange<Int>) -> Substring {
get {
return self[index(startIndex, offsetBy: value.lowerBound)...index(startIndex, offsetBy: value.upperBound)]
}
}
public subscript(value: CountableRange<Int>) -> Substring {
get {
return self[index(startIndex, offsetBy: value.lowerBound)..<index(startIndex, offsetBy: value.upperBound)]
}
}
public subscript(value: PartialRangeUpTo<Int>) -> Substring {
get {
return self[..<index(startIndex, offsetBy: value.upperBound)]
}
}
public subscript(value: PartialRangeThrough<Int>) -> Substring {
get {
return self[...index(startIndex, offsetBy: value.upperBound)]
}
}
public subscript(value: PartialRangeFrom<Int>) -> Substring {
get {
return self[index(startIndex, offsetBy: value.lowerBound)...]
}
}
}
let data = "This is a string!"
data[..<4] // "This"
data[5..<9] // "is a"
data[10...] // "string!"
By using a KeyPath along with a generic type, a very clean and concise syntax for sorting data can be implemented:
import Foundation
extension Sequence {
func sorted<T: Comparable>(by attribute: KeyPath<Element, T>) -> [Element] {
return sorted(by: { $0[keyPath: attribute] < $1[keyPath: attribute] })
}
}
let data = ["Some", "words", "of", "different", "lengths"]
data.sorted(by: \.count) // ["of", "Some", "words", "lengths", "different"]
If you like this syntax, make sure to checkout KeyPathKit!
By capturing a local variable in a returned closure, it is possible to manufacture cache-efficient versions of pure functions. Be careful though, this trick only works with non-recursive functions!
import Foundation
func cached<In: Hashable, Out>(_ f: @escaping (In) -> Out) -> (In) -> Out {
var cache = [In: Out]()
return { (input: In) -> Out in
if let cachedValue = cache[input] {
return cachedValue
} else {
let result = f(input)
cache[input] = result
return result
}
}
}
let cachedCos = cached { (x: Double) in cos(x) }
cachedCos(.pi * 2) // value of cos for 2π is now cached
When distinguishing between complex boolean conditions, using a switch statement along with pattern matching can be more readable than the classic series of if {} else if {}.
import Foundation
func functionA() { print("A") }
func functionB() { print("B") }
func functionC() { print("C") }
let expr1 = true
let expr2 = false
let expr3 = true
if expr1 && !expr3 {
functionA()
} else if !expr2 && expr3 {
functionB()
} else if expr1 && !expr2 && expr3 {
functionC()
}
switch (expr1, expr2, expr3) {
case (true, _, false):
functionA()
case (_, false, true):
functionB()
case (true, false, true):
functionC()
default:
break
}
Using map() on a range makes it easy to generate an array of data.
import Foundation
func randomInt() -> Int { return Int(arc4random()) }
let randomArray = (1...10).map { _ in randomInt() }
Using @autoclosure enables the compiler to automatically wrap an argument within a closure, thus allowing for a very clean syntax at call sites.
import UIKit
extension UIView {
class func animate(withDuration duration: TimeInterval, _ animations: @escaping @autoclosure () -> Void) {
UIView.animate(withDuration: duration, animations: animations)
}
}
let view = UIView()
UIView.animate(withDuration: 0.3, view.backgroundColor = .orange)
When working with RxSwift, it's very easy to observe both the current and previous value of an observable sequence by simply introducing a shift using skip().
import RxSwift
let values = Observable.of(4, 8, 15, 16, 23, 42)
let newAndOld = Observable.zip(values, values.skip(1)) { (previous: $0, current: $1) }
.subscribe(onNext: { pair in
print("current: \(pair.current) - previous: \(pair.previous)")
})
//current: 8 - previous: 4
//current: 15 - previous: 8
//current: 16 - previous: 15
//current: 23 - previous: 16
//current: 42 - previous: 23
Using protocols such as ExpressibleByStringLiteral it is possible to provide an init that will be automatically called when a literal value is provided, allowing for a nice and short syntax. This can be very helpful when writing mock or test data.
import Foundation
extension URL: ExpressibleByStringLiteral {
public init(stringLiteral value: String) {
self.init(string: value)!
}
}
let url: URL = "http://www.google.fr"
NSURLConnection.canHandle(URLRequest(url: "http://www.google.fr"))
Through some clever use of Swift private visibility it is possible to define a container that holds any untrusted value (such as a user input) from which the only way to retrieve the value is by making it successfully pass a validation test.
import Foundation
struct Untrusted<T> {
private(set) var value: T
}
protocol Validator {
associatedtype T
static func validation(value: T) -> Bool
}
extension Validator {
static func validate(untrusted: Untrusted<T>) -> T? {
if self.validation(value: untrusted.value) {
return untrusted.value
} else {
return nil
}
}
}
struct FrenchPhoneNumberValidator: Validator {
static func validation(value: String) -> Bool {
return (value.count) == 10 && CharacterSet(charactersIn: value).isSubset(of: CharacterSet.decimalDigits)
}
}
let validInput = Untrusted(value: "0122334455")
let invalidInput = Untrusted(value: "0123")
FrenchPhoneNumberValidator.validate(untrusted: validInput) // returns "0122334455"
FrenchPhoneNumberValidator.validate(untrusted: invalidInput) // returns nil
With the addition of keypaths in Swift 4, it is now possible to easily implement the builder pattern, that allows the developer to clearly separate the code that initializes a value from the code that uses it, without the burden of defining a factory method.
import UIKit
protocol With {}
extension With where Self: AnyObject {
@discardableResult
func with<T>(_ property: ReferenceWritableKeyPath<Self, T>, setTo value: T) -> Self {
self[keyPath: property] = value
return self
}
}
extension UIView: With {}
let view = UIView()
let label = UILabel()
.with(\.textColor, setTo: .red)
.with(\.text, setTo: "Foo")
.with(\.textAlignment, setTo: .right)
.with(\.layer.cornerRadius, setTo: 5)
view.addSubview(label)
🚨 The Swift compiler does not perform OS availability checks on properties referenced by keypaths. Any attempt to use a KeyPath for an unavailable property will result in a runtime crash.
I share the credit for this tip with Marion Curtil.
When a type stores values for the sole purpose of parametrizing its functions, it's then possible to store not the values but the function directly, with no discernible difference at the call site.
import Foundation
struct MaxValidator {
let max: Int
let strictComparison: Bool
func isValid(_ value: Int) -> Bool {
return self.strictComparison ? value < self.max : value <= self.max
}
}
struct MaxValidator2 {
var isValid: (_ value: Int) -> Bool
init(max: Int, strictComparison: Bool) {
self.isValid = strictComparison ? { $0 < max } : { $0 <= max }
}
}
MaxValidator(max: 5, strictComparison: true).isValid(5) // false
MaxValidator2(max: 5, strictComparison: false).isValid(5) // true
Functions are first-class citizen types in Swift, so it is perfectly legal to define operators for them.
import Foundation
let firstRange = { (0...3).contains($0) }
let secondRange = { (5...6).contains($0) }
func ||(_ lhs: @escaping (Int) -> Bool, _ rhs: @escaping (Int) -> Bool) -> (Int) -> Bool {
return { value in
return lhs(value) || rhs(value)
}
}
(firstRange || secondRange)(2) // true
(firstRange || secondRange)(4) // false
(firstRange || secondRange)(6) // true
Typealiases are great to express function signatures in a more comprehensive manner, which then enables us to easily define functions that operate on them, resulting in a nice way to write and use some powerful API.
import Foundation
typealias RangeSet = (Int) -> Bool
func union(_ left: @escaping RangeSet, _ right: @escaping RangeSet) -> RangeSet {
return { left($0) || right($0) }
}
let firstRange = { (0...3).contains($0) }
let secondRange = { (5...6).contains($0) }
let unionRange = union(firstRange, secondRange)
unionRange(2) // true
unionRange(4) // false
By returning a closure that captures a local variable, it's possible to encapsulate a mutable state within a function.
import Foundation
func counterFactory() -> () -> Int {
var counter = 0
return {
counter += 1
return counter
}
}
let counter = counterFactory()
counter() // returns 1
counter() // returns 2
⚠️ Since Swift 4.2, allCases can be synthesized at compile-time by simply conforming to the protocol CaseIterable. The implementation below should no longer be used in production code.
Through some clever leveraging of how enums are stored in memory, it is possible to generate an array that contains all the possible cases of an enum. This can prove particularly useful when writing unit tests that consume random data.
import Foundation
enum MyEnum { case first; case second; case third; case fourth }
protocol EnumCollection: Hashable {
static var allCases: [Self] { get }
}
extension EnumCollection {
public static var allCases: [Self] {
var i = 0
return Array(AnyIterator {
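// Relies on the pre-Swift 4.2 behavior where a simple enum's hashValue matched its case index:
// rebinding the integer's memory to Self yields the case at that index, and iteration stops once the hash no longer matches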
let next = withUnsafePointer(to: &i) {
$0.withMemoryRebound(to: Self.self, capacity: 1) { $0.pointee }
}
if next.hashValue != i { return nil }
i += 1
return next
})
}
}
extension MyEnum: EnumCollection { }
MyEnum.allCases // [.first, .second, .third, .fourth]
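For comparison, here is a minimal sketch of the modern alternative mentioned in the warning above, relying on the CaseIterable conformance synthesized by the compiler (assuming Swift 4.2 or later; the enum name here is purely illustrative):
import Foundation
// Conforming to CaseIterable is enough: the compiler synthesizes `allCases` automatically
enum Direction: CaseIterable { case north, south, east, west }
Direction.allCases // [.north, .south, .east, .west]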
The if-let syntax is a great way to deal with optional values in a safe manner, but at times it can prove to be just a little bit too cumbersome. In such cases, using the Optional.map() function is a nice way to achieve shorter code while retaining safety and readability.
import UIKit
let date: Date? = Date() // or could be nil, doesn't matter
let formatter = DateFormatter()
let label = UILabel()
if let safeDate = date {
label.text = formatter.string(from: safeDate)
}
label.text = date.map { return formatter.string(from: $0) }
label.text = date.map(formatter.string(from:)) // even shorter, though less readable
Summary
String interpolation
structs
NSAttributedString through a Function Builder
switch and if as expressions
guard statements
init without losing the compiler-generated one
enum
Never to represent impossible code paths
Decodable enum
[weak self] and guard
userInfo Dictionary
typealias to its fullest
forEach
reduce()
Optional booleans
Sequence
nil values
map()
Tips
Author: vincent-pradeilles
Source code: https://github.com/vincent-pradeilles/swift-tips
License: MIT license
#swift
1668563924
In this article, we walk through a step-by-step Machine Learning tutorial for beginners. The tutorial covers both the basics and intermediate concepts of machine learning, and is designed for students and working professionals who are complete beginners. At the end of this tutorial, you will be able to make machine learning models that can perform complex tasks such as predicting the price of a house or recognizing the species of an Iris from the dimensions of its petal and sepal lengths. If you are not a complete beginner and are a bit familiar with Machine Learning, I would suggest starting with subtopic eight, i.e., Types of Machine Learning.
Before we dive deeper, if you are keen to explore a course in Artificial Intelligence & Machine Learning, do check out our Artificial Intelligence Courses available at Great Learning. Anyone could expect an average salary hike of 48% from this course. Participate in Great Learning's career accelerator programs and placement drives and get hired by our pool of 500+ hiring companies through our programs.
Before jumping into the tutorial, you should be familiar with Pandas and NumPy. This is important to understand the implementation part. There are no prerequisites for understanding the theory. Here are the subtopics that we are going to discuss in this tutorial:
Arthur Samuel coined the term Machine Learning in the year 1959. He was a pioneer in Artificial Intelligence and computer gaming, and defined Machine Learning as “Field of study that gives computers the capability to learn without being explicitly programmed”.
In simple terms, Machine Learning is an application of Artificial Intelligence (AI) which enables a program(software) to learn from the experiences and improve their self at a task without being explicitly programmed. For example, how would you write a program that can identify fruits based on their various properties, such as colour, shape, size or any other property?
One approach is to hardcode everything, make some rules and use them to identify the fruits. This may seem the only way and work but one can never make perfect rules that apply on all cases. This problem can be easily solved using machine learning without any rules which makes it more robust and practical. You will see how we will use machine learning to do this task in the coming sections.
Thus, we can say that Machine Learning is the study of making machines more human-like in their behaviour and decision making by giving them the ability to learn with minimum human intervention, i.e., no explicit programming. Now the question arises, how can a program attain any experience and from where does it learn? The answer is data. Data is also called the fuel for Machine Learning and we can safely say that there is no machine learning without data.
You may be wondering that the term Machine Learning has been introduced in 1959 which is a long way back, then why haven’t there been any mention of it till recent years? You may want to note that Machine Learning needs a huge computational power, a lot of data and devices which are capable of storing such vast data. We have only recently reached a point where we now have all these requirements and can practice Machine Learning.
Are you wondering how is Machine Learning different from traditional programming? Well, in traditional programming, we would feed the input data and a well written and tested program into a machine to generate output. When it comes to machine learning, input data along with the output associated with the data is fed into the machine during the learning phase, and it works out a program for itself.
Machine Learning today has all the attention it needs. Machine Learning can automate many tasks, especially the ones that only humans can perform with their innate intelligence. Replicating this intelligence to machines can be achieved only with the help of machine learning.
With the help of Machine Learning, businesses can automate routine tasks. It also helps automate data analysis and quickly create models for it. Various industries depend on vast quantities of data to optimize their operations and make intelligent decisions. Machine Learning helps in creating models that can process and analyze large amounts of complex data to deliver accurate results. These models are precise and scalable and function with a shorter turnaround time. By building such precise Machine Learning models, businesses can leverage profitable opportunities and avoid unknown risks.
Image recognition, text generation, and many other use cases are finding applications in the real world. This is increasing the scope for machine learning experts to shine as sought-after professionals.
A machine learning model learns from the historical data fed to it and then builds prediction algorithms to predict the output for the new set of data that comes in as input to the system. The accuracy of these models would depend on the quality and amount of input data. A large amount of data will help build a better model which predicts the output more accurately.
Suppose we have a complex problem at hand that requires us to make predictions. Instead of writing code by hand, this problem can be solved by feeding the given data to generic machine learning algorithms. With the help of these algorithms, the machine develops its own logic and predicts the output. Machine learning has transformed the way we approach business and social problems, and our way of thinking about them.
Nowadays, we can see some amazing applications of ML such as in self-driving cars, Natural Language Processing and many more. But Machine learning has been here for over 70 years now. It all started in 1943, when neurophysiologist Warren McCulloch and mathematician Walter Pitts wrote a paper about neurons, and how they work. They decided to create a model of this using an electrical circuit, and therefore, the neural network was born.
In 1950, Alan Turing created the “Turing Test” to determine if a computer has real intelligence. To pass the test, a computer must be able to fool a human into believing it is also human. In 1952, Arthur Samuel wrote the first computer learning program. The program was the game of checkers, and the IBM computer improved at the game the more it played, studying which moves made up winning strategies and incorporating those moves into its program.
Just after a few years, in 1957, Frank Rosenblatt designed the first neural network for computers (the perceptron), which simulates the thought processes of the human brain. Later, in 1967, the “nearest neighbor” algorithm was written, allowing computers to begin using very basic pattern recognition. This could be used to map a route for travelling salesmen, starting at a random city but ensuring they visit all cities during a short tour.
But we can say that in the 1990s we saw a big change. Now work on machine learning shifted from a knowledge-driven approach to a data-driven approach. Scientists began to create programs for computers to analyze large amounts of data and draw conclusions or “learn” from the results.
In 1997, IBM’s Deep Blue became the first computer chess-playing system to beat a reigning world chess champion. Deep Blue used the computing power of the 1990s to perform large-scale searches of potential moves and select the best move. Nearly a decade later, in 2006, Geoffrey Hinton coined the term “deep learning” to explain new algorithms that help computers distinguish objects and text in images and videos.
The year 2012 saw the publication of an influential research paper by Alex Krizhevsky, Geoffrey Hinton, and Ilya Sutskever, describing a model that could dramatically reduce the error rate in image recognition systems. Meanwhile, Google’s X Lab developed a machine learning algorithm capable of autonomously browsing YouTube videos to identify the videos that contain cats. In 2016, AlphaGo (created by researchers at Google DeepMind to play the ancient Chinese game of Go) won four out of five matches against Lee Sedol, who had been the world’s top Go player for over a decade.
And in 2020, OpenAI released GPT-3, which was at the time the most powerful language model ever built. It can write creative fiction, generate functioning code, compose thoughtful business memos and much more. Its possible use cases are limited only by our imaginations.
1. Automation: Nowadays, your Gmail account has a spam folder that contains all the spam emails. You might be wondering how Gmail knows that all these emails are spam. This is the work of Machine Learning: it recognizes the spam emails, making it easy to automate this process. The ability to automate repetitive tasks is one of the biggest characteristics of machine learning. A huge number of organizations are already using machine-learning-powered paperwork and email automation. In the financial sector, for example, a huge number of repetitive, data-heavy and predictable tasks need to be performed. Because of this, the sector uses different types of machine learning solutions to a great extent.
2. Improved customer experience: For any business, one of the most crucial ways to drive engagement, promote brand loyalty and establish long-lasting customer relationships is by providing a customized experience and better services. Machine Learning helps us achieve both. Have you ever noticed that whenever you open a shopping site or see ads on the internet, they are mostly about something that you recently searched for? This is because machine learning has enabled us to build accurate recommendation systems that help customize the user experience. On the service side, most companies nowadays have chatbots that are available 24×7. An example of this is Eva from AirAsia airlines. These bots provide intelligent answers, and sometimes you might not even notice that you are having a conversation with a bot. These bots use Machine Learning, which helps them provide a good user experience.
3. Automated data visualization: In the past, we have seen a huge amount of data being generated by companies and individuals. Take an example of companies like Google, Twitter, Facebook. How much data are they generating per day? We can use this data and visualize the notable relationships, thus giving businesses the ability to make better decisions that can actually benefit both companies as well as customers. With the help of user-friendly automated data visualization platforms such as AutoViz, businesses can obtain a wealth of new insights in an effort to increase productivity in their processes.
4. Business intelligence: Machine learning characteristics, when merged with big data analytics, can help companies find solutions to problems that help the business grow and generate more profit. From retail to financial services to healthcare and many more, ML has already become one of the most effective technologies for boosting business operations.
Python provides flexibility in choosing between object-oriented programming or scripting. There is also no need to recompile the code; developers can implement any changes and instantly see the results. You can use Python along with other languages to achieve the desired functionality and results.
Python is a versatile programming language and can run on any platform, including Windows, MacOS, Linux, Unix, and others. While migrating from one platform to another, the code needs only some minor adaptations and changes, and it is ready to work on the new platform. To build a strong foundation and cover basic concepts, you can enrol in a Python machine learning course that will help you power ahead in your career.
Here is a summary of the benefits of using Python for Machine Learning problems:
Machine learning has been broadly divided into three categories:
Let us start with an easy example, say you are teaching a kid to differentiate dogs from cats. How would you do it?
You may show him or her a dog and say “here is a dog”, and when you encounter a cat you would point it out as a cat. When you show the kid enough dogs and cats, they may learn to differentiate between them. If trained well, they may even be able to recognize breeds of dogs they haven’t seen before.
Similarly, in Supervised Learning, we have two sets of variables: the target variable, or label (the variable we want to predict), and the features (variables that help us predict the target variable). We show the program (model) the features and the label associated with those features, and the program finds the underlying pattern in the data. Take this example of a dataset where we want to predict the price of a house given the number of rooms it has. The price, which is the target variable, depends upon the number of rooms, which is a feature.
| Number of rooms | Price |
| --- | --- |
| 1 | $100 |
| 3 | $300 |
| 5 | $500 |
In a real dataset, we will have a lot more rows and more than one feature, such as size, location, number of floors and many more.
Thus, we can say that the supervised learning model has a set of input variables (x), and an output variable (y). An algorithm identifies the mapping function between the input and output variables. The relationship is y = f(x).
The learning is monitored or supervised in the sense that we already know the output, and the algorithm is corrected each time to optimize its results. The algorithm is trained over the data set and amended until it achieves an acceptable level of performance.
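To make the mapping y = f(x) concrete, here is a minimal sketch (using scikit-learn, which this article uses later for its examples) that fits a model on the tiny table above and predicts the price of an unseen house; the 4-room query is just an illustrative value.

import numpy as np
from sklearn.linear_model import LinearRegression

# Features (x): number of rooms; labels (y): price in dollars, taken from the table above
X = np.array([[1], [3], [5]])
y = np.array([100, 300, 500])

# The model learns the mapping y = f(x) from the labelled examples
model = LinearRegression()
model.fit(X, y)

# Predict the price of a house with 4 rooms (an input the model has never seen)
print(model.predict(np.array([[4]])))  # ~400, since the learned pattern is price = 100 * rooms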
We can group the supervised learning problems as:
Regression problems – Used to predict future values and the model is trained with the historical data. E.g., Predicting the future price of a house.
Classification problems – Various labels train the algorithm to identify items within a specific category. E.g., Dog or cat( as mentioned in the above example), Apple or an orange, Beer or wine or water.
This approach is the one where we have no target variable, only the input variables (features) at hand. The algorithm learns by itself and discovers structure in the data.
The goal is to decipher the underlying distribution in the data to gain more knowledge about the data.
We can group the unsupervised learning problems as:
Clustering: This means bundling the input variables with the same characteristics together. E.g., grouping users based on search history
Association: Here, we discover the rules that govern meaningful associations among the data set. E.g., People who watch ‘X’ will also watch ‘Y’.
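As a tiny illustration of clustering, here is a sketch using scikit-learn’s KMeans on a handful of made-up 2-D points; the points, the two behavioural features and the choice of two clusters are all assumptions made purely for the example.

import numpy as np
from sklearn.cluster import KMeans

# Hypothetical data: each row could represent a user described by two behavioural features
points = np.array([
    [1.0, 1.2], [0.8, 1.0], [1.1, 0.9],   # one natural group
    [8.0, 8.5], [7.8, 8.1], [8.3, 7.9],   # another natural group
])

# Ask KMeans to find 2 clusters; note that no labels are provided (unsupervised)
kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(points)
print(kmeans.labels_)           # cluster assignment for each point
print(kmeans.cluster_centers_)  # centre of each discovered group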
In this approach, machine learning models are trained to make a series of decisions based on the rewards and feedback they receive for their actions. The machine learns to achieve a goal in complex and uncertain situations and is rewarded each time it achieves it during the learning period.
Reinforcement learning is different from supervised learning in the sense that there is no answer available, so the reinforcement agent decides the steps to perform a task. The machine learns from its own experiences when there is no training data set present.
In this tutorial, we are going to mainly focus on Supervised Learning and Unsupervised learning as these are quite easy to understand and implement.
This may be the most time-consuming and difficult process in your journey of Machine Learning. There are many algorithms in Machine Learning and you don’t need to know them all in order to get started. But I would suggest, once you start practising Machine Learning, start learning about the most popular algorithms out there such as:
Here, I am going to give a brief overview of one of the simplest algorithms in Machine learning, the K-nearest neighbor Algorithm (which is a Supervised learning algorithm) and show how we can use it for Regression as well as for classification. I would highly recommend checking the Linear Regression and Logistic Regression as we are going to implement them and compare the results with KNN(K-nearest neighbor) algorithm in the implementation part.
You may want to note that there are usually separate algorithms for regression problems and classification problems. But by modifying an algorithm, we can use it for both classification and regression, as you will see below.
KNN belongs to a group of lazy learners. As opposed to eager learners such as logistic regression, SVMs and neural nets, lazy learners simply store the training data in memory. During the training phase, KNN may arrange the data (a sort of indexing process) so that the closest neighbours can be found efficiently during the inference phase; otherwise, it would have to compare each new case against the whole dataset, which is quite inefficient.
So if you are wondering what the training phase, eager learners and lazy learners are, for now just remember that the training phase is when an algorithm learns from the data provided to it. For example, if you have gone through the Linear Regression algorithm linked above, during its training phase the algorithm tries to find the best-fit line, a process that involves a lot of computation and hence takes a lot of time; this type of algorithm is called an eager learner. Lazy learners such as KNN, on the other hand, do not involve much computation during training and hence train faster.
Now let us see how we can use K-NN for classification. Here is a hypothetical dataset that tries to predict whether a person is male or female (label) on the basis of height and weight (features).
| Height (cm) - feature | Weight (kg) - feature | Gender - label |
| --- | --- | --- |
| 187 | 80 | Male |
| 165 | 50 | Female |
| 199 | 99 | Male |
| 145 | 70 | Female |
| 180 | 87 | Male |
| 178 | 65 | Female |
| 187 | 60 | Male |
Now let us plot these points:
Now we have a new point that we want to classify, given that its height is 190 cm and its weight is 100 kg. K-NN classifies this point by computing its distance to every point in the dataset, picking the K closest points, and assigning the label held by the majority of those K neighbours.
Now let us apply this algorithm to our own dataset. Let us first plot the new data point.
Now let us take k=3, i.e., we will look at the three closest points to the new point:
Therefore, it is classified as Male:
Now let us take the value of k=5 and see what happens:
As we can see, four of the five points closest to our new data point are male and just one is female, so we go with the majority and classify it as Male again. It is good practice to choose an odd value of K for binary classification so that the vote cannot end in a tie.
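To make the voting procedure concrete, here is a small from-scratch sketch (plain Python, nothing beyond the standard library) that reproduces the k=3 and k=5 results above for the new point (190 cm, 100 kg), using Euclidean distance on the raw, unscaled values.

import math
from collections import Counter

# Dataset from the table above: (height cm, weight kg, gender)
data = [
    (187, 80, "Male"), (165, 50, "Female"), (199, 99, "Male"),
    (145, 70, "Female"), (180, 87, "Male"), (178, 65, "Female"),
    (187, 60, "Male"),
]

def knn_classify(height, weight, k):
    # Sort the training points by Euclidean distance to the query point and keep the k nearest
    neighbours = sorted(data, key=lambda p: math.dist((height, weight), (p[0], p[1])))[:k]
    # Majority vote over the labels of the k nearest neighbours
    return Counter(label for _, _, label in neighbours).most_common(1)[0][0]

print(knn_classify(190, 100, k=3))  # Male (all 3 nearest neighbours are male)
print(knn_classify(190, 100, k=5))  # Male (4 of the 5 nearest neighbours are male)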
We have seen how we can use K-NN for classification. Now let us see what changes are needed to use it for regression. The algorithm is almost the same; there is just one difference. In classification, we took a majority vote over the nearest points, whereas here we take the average of the nearest points’ values and use that as the predicted value. Let us again take the same example, but this time we have to predict the weight (label) of a person given their height (feature).
| Height (cm) - feature | Weight (kg) - label |
| --- | --- |
| 187 | 80 |
| 165 | 50 |
| 199 | 99 |
| 145 | 70 |
| 180 | 87 |
| 178 | 65 |
| 187 | 60 |
Now we have a new data point with a height of 160 cm; we will predict its weight by taking the value of K as 1, 2 and 4.
When K=1: The closest point to 160cm in our data is 165cm which has a weight of 50, so we conclude that the predicted weight is 50 itself.
When K=2: The two closest points are 165 and 145 which have weights equal to 50 and 70 respectively. Taking average we say that the predicted weight is (50+70)/2=60.
When K=4: Repeating the same process with the 4 closest points (165, 145, 178 and 180 cm, with weights 50, 70, 65 and 87), we get (50+70+65+87)/4 = 68 as the predicted weight.
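The same neighbour search with an average instead of a vote gives the regression predictions above. Here is a short sketch that reproduces the values for K = 1, 2 and 4.

# Dataset from the table above: (height cm, weight kg)
data = [(187, 80), (165, 50), (199, 99), (145, 70), (180, 87), (178, 65), (187, 60)]

def knn_regress(height, k):
    # Take the k points whose heights are closest to the query height
    neighbours = sorted(data, key=lambda p: abs(p[0] - height))[:k]
    # Predict the average weight of those neighbours
    return sum(w for _, w in neighbours) / k

for k in (1, 2, 4):
    print(k, knn_regress(160, k))  # 50.0, 60.0 and 68.0 respectively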
You might be thinking that this is really simple and there is nothing so special about Machine learning, it is just basic Mathematics. But remember this is the simplest algorithm and you will see much more complex algorithms once you move ahead in this journey.
At this stage, you must have a vague idea of how machine learning works; don’t worry if you are still confused. Also, if you want to go a bit deeper now, here is an excellent article – Gradient Descent in Machine Learning – which discusses how we use an optimization technique called gradient descent to find the best-fit line in linear regression.
There are plenty of machine learning algorithms and it could be a tough task to decide which algorithm to choose for a specific application. The choice of the algorithm will depend on the objective of the problem you are trying to solve.
Let us take an example of a task to predict the type of fruit among three varieties, i.e., apple, banana, and orange, based on the colour of the fruit. The picture depicts the results of ten different algorithms. The picture on the top left is the dataset. The data is classified into three categories: red, light blue and dark blue. There are some groupings. For instance, in the second image, everything in the upper left belongs to the red category, the middle part is a mixture of uncertainty and light blue, while the bottom corresponds to the dark blue category. The other images show different algorithms and how they try to classify the data.
I wish Machine Learning were just a matter of applying algorithms to your data and getting predictions back, but it is not that simple. There are several steps in Machine Learning that are a must for every project.
For evaluating the model, we hold out a portion of data called test data and do not use this data to train the model. Later, we use test data to evaluate various metrics.
The results of predictive models can be viewed in various forms, such as a confusion matrix, root mean squared error (RMSE), AUC-ROC, etc.
TP (True Positive) is the number of values the algorithm predicted as positive that were actually positive in the dataset. TN (True Negative) is the number of values predicted as not belonging to the positive class that indeed do not belong to it. FP (False Positive) is the number of instances misclassified as belonging to the positive class that are actually part of the negative class. FN (False Negative) is the number of instances classified as negative that should belong to the positive class.
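As a quick sketch of how these four counts are obtained in practice, scikit-learn’s confusion_matrix can compute them from the true and predicted labels; the labels below are made up purely for illustration.

from sklearn.metrics import confusion_matrix

# Hypothetical true labels and model predictions (1 = positive class, 0 = negative class)
y_true = [1, 0, 1, 1, 0, 0, 1, 0]
y_pred = [1, 0, 0, 1, 0, 1, 1, 0]

# For binary labels [0, 1], scikit-learn lays the matrix out as [[TN, FP], [FN, TP]]
tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
print("TP:", tp, "TN:", tn, "FP:", fp, "FN:", fn)  # TP: 3 TN: 3 FP: 1 FN: 1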
For regression problems, we usually use RMSE as the evaluation metric. This evaluation technique is based on the error term.
Let’s say you feed the model some input X and it predicts 10, but the actual value is 5. The difference between your prediction (10) and the actual observation (5) is the error term: (f_prediction − i_actual). The formula to calculate RMSE is:
RMSE = sqrt( (1/N) * Σ (f_prediction − i_actual)² )
where N is the total number of samples for which we are calculating RMSE.
In a good model, the RMSE should be as low as possible and there should not be much difference between RMSE calculated over training data and RMSE calculated over the testing set.
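For reference, the RMSE formula above can be computed directly with a few lines of NumPy; the prediction and actual values here are made-up numbers.

import numpy as np

# Hypothetical predictions and actual observations for N = 4 samples
predictions = np.array([10.0, 3.0, 6.0, 8.0])
actuals     = np.array([ 5.0, 4.0, 6.0, 9.0])

# RMSE = sqrt( (1/N) * sum of squared error terms )
rmse = np.sqrt(np.mean((predictions - actuals) ** 2))
print(rmse)  # ~2.598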
Although there are many languages that can be used for machine learning, in my opinion Python is hands down the best programming language for Machine Learning applications, for the reasons mentioned in the section below. Other programming languages that could be used for Machine Learning applications are R, C++, JavaScript, Java, C#, Julia, Shell, TypeScript, and Scala. R is also a really good language to get started with machine learning.
Python is famous for its readability and relatively lower complexity as compared to other programming languages. Machine Learning applications involve complex concepts like calculus and linear algebra which take a lot of effort and time to implement. Python helps in reducing this burden with quick implementation for the Machine Learning engineer to validate an idea. You can check out the Python Tutorial to get a basic understanding of the language. Another benefit of using Python in Machine Learning is the pre-built libraries. There are different packages for a different type of applications, as mentioned below:
Before moving on to the implementation of machine learning with Python, you need to download some important software and libraries. Anaconda is an open-source distribution that makes it easy to perform Python/R data science and machine learning on a single machine. It contains almost all the libraries that we need. In this tutorial, we are mostly going to use the scikit-learn library, which is a free machine learning library for the Python programming language.
Now, we are going to implement all that we learnt till now. We will solve a Regression problem and then a Classification problem using the seven steps mentioned above.
Implementation of a Regression problem
We have a problem of predicting the prices of the house given some features such as size, number of rooms and many more. So let us get started:
The dataset we are using is called the Boston Housing dataset. Each record in the database describes a Boston suburb or town. The data was drawn from the Boston Standard Metropolitan Statistical Area (SMSA) in 1970. The attributes are defined as follows (taken from the UCI Machine Learning Repository).
Here is a link to download this dataset.
Now, after opening the file, you can see the data about house sales. This dataset is not in a proper tabular form; in fact, there are no column names and the values are separated by spaces. We are going to use Pandas to put it in proper tabular form. We will provide it with a list containing the column names and use the delimiter ‘\s+’, which tells pandas to treat one or more whitespace characters as the separator between entries.
We are going to import all the necessary libraries, such as Pandas and NumPy. Next, we will read the data file into a pandas DataFrame.
import numpy as np
import pandas as pd
column_names = ['CRIM', 'ZN', 'INDUS', 'CHAS', 'NOX', 'RM', 'AGE', 'DIS', 'RAD', 'TAX','PTRATIO', 'B', 'LSTAT', 'MEDV']
bos1 = pd.read_csv('housing.csv', delimiter=r"\s+", names=column_names)
2. Preprocess Data: The next step is to pre-process the data. For this dataset, we can see that there are no NaN (missing) values and all the data is numeric rather than strings, so we won’t face any errors when training the model. So let us just divide our data into training data and testing data such that 70% of the data is training data and the rest is testing data. We could also scale our data to make the predictions more accurate, but for now let us keep it simple.
bos1.isna().sum()
from sklearn.model_selection import train_test_split
X=np.array(bos1.iloc[:,0:13])
Y=np.array(bos1["MEDV"])
#testing data size is of 30% of entire data
x_train, x_test, y_train, y_test =train_test_split(X,Y, test_size = 0.30, random_state =5)
3. Choose a Model: For this particular problem, we are going to use two supervised learning algorithms that can solve regression problems and later compare their results. One algorithm is K-NN (K-nearest Neighbor), which is explained above, and the other is Linear Regression. I would highly recommend checking it out in case you haven’t already.
from sklearn.linear_model import LinearRegression
from sklearn.neighbors import KNeighborsRegressor
#load our first model
lr = LinearRegression()
#train the model on training data
lr.fit(x_train,y_train)
#predict the testing data so that we can later evaluate the model
pred_lr = lr.predict(x_test)
#load the second model
Nn=KNeighborsRegressor(3)
Nn.fit(x_train,y_train)
pred_Nn = Nn.predict(x_test)
4. Hyperparameter Tuning: Since this is a beginners’ tutorial, I am only going to tune the value of K in the K-NN model. I will just use a for loop and check the results for k ranging from 1 to 50. K-NN is extremely fast on a small dataset like ours, so this won’t take any time. There are much more advanced methods of doing this, which you can find linked in the steps of Machine Learning section above.
from sklearn.metrics import mean_squared_error
for i in range(1,50):
    model = KNeighborsRegressor(i)
    model.fit(x_train, y_train)
    pred_y = model.predict(x_test)
    # squared=False returns the root mean squared error (RMSE) instead of the MSE
    rmse = mean_squared_error(y_test, pred_y, squared=False)
    print("{} error for k = {}".format(rmse, i))
Output:
From the output, we can see that the error is lowest for k=3, which justifies why the value K=3 was used when training the model above.
5. Evaluating the model: For evaluating the model we are going to use the mean_squared_error() method from the scikit-learn library. Remember to set the parameter squared to False to get the RMSE instead of the MSE.
#RMSE for linear regression
mse_lr = mean_squared_error(y_test, pred_lr, squared=False)
print("error for Linear Regression = {}".format(mse_lr))
#RMSE for K-NN
mse_Nn = mean_squared_error(y_test, pred_Nn, squared=False)
print("error for K-NN = {}".format(mse_Nn))
Now from the results, we can conclude that Linear Regression performs better than K-NN for this particular dataset. However, it is not necessarily the case that Linear Regression will always perform better than K-NN; it completely depends on the data we are working with.
6. Prediction: Now we can use the models to predict the prices of houses using the predict function, as we did above. Make sure that when predicting prices we supply all the features that were present when training the model.
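For example, once the models above are trained, predicting the price of a “new” house is a single call to predict(). Here the first row of the held-out test set stands in for a new house, so we do not have to invent feature values.

# Take one sample (its 13 feature values) from the test set and ask both models for a price
new_house = x_test[:1]
print("Linear Regression prediction:", lr.predict(new_house))
print("K-NN prediction:", Nn.predict(new_house))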
Here is the whole script:
import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LinearRegression
from sklearn.neighbors import KNeighborsRegressor
from sklearn.metrics import mean_squared_error
column_names = ['CRIM', 'ZN', 'INDUS', 'CHAS', 'NOX', 'RM', 'AGE', 'DIS', 'RAD', 'TAX', 'PTRATIO', 'B', 'LSTAT', 'MEDV']
bos1 = pd.read_csv('housing.csv', delimiter=r"\s+", names=column_names)
X = np.array(bos1.iloc[:, 0:13])
Y = np.array(bos1["MEDV"])
#testing data size is 30% of the entire data
x_train, x_test, y_train, y_test = train_test_split(X, Y, test_size=0.30, random_state=5)
#load our first model
lr = LinearRegression()
#train the model on training data
lr.fit(x_train, y_train)
#predict the testing data so that we can later evaluate the model
pred_lr = lr.predict(x_test)
#load the second model (k=3, as chosen during hyperparameter tuning above)
Nn = KNeighborsRegressor(3)
Nn.fit(x_train, y_train)
pred_Nn = Nn.predict(x_test)
#RMSE for linear regression
mse_lr = mean_squared_error(y_test, pred_lr, squared=False)
print("error for Linear Regression = {}".format(mse_lr))
#RMSE for K-NN
mse_Nn = mean_squared_error(y_test, pred_Nn, squared=False)
print("error for K-NN = {}".format(mse_Nn))
In this section, we will solve the popular classification problem known as the Iris classification problem. The Iris dataset was used in R.A. Fisher’s classic 1936 paper, The Use of Multiple Measurements in Taxonomic Problems, and can also be found on the UCI Machine Learning Repository.
It includes three iris species with 50 samples each, as well as some properties of each flower. One flower species is linearly separable from the other two, but the other two are not linearly separable from each other. The columns in this dataset are the sepal length, sepal width, petal length and petal width (all in cm), along with the species.
Different species of iris
We don’t need to download this dataset, as the scikit-learn library already contains it and we can simply import it from there. So let us start coding this up:
from sklearn.datasets import load_iris
iris = load_iris()
X=iris.data
Y=iris.target
print(X)
print(Y)
As we can see, each sample is a list of four values, which are the features, and at the bottom we get a list of labels that have been transformed into numbers. The model cannot work with species names that are strings, so each name is encoded as a number; this has already been done by the scikit-learn developers.
from sklearn.model_selection import train_test_split
#testing data size is of 30% of entire data
x_train, x_test, y_train, y_test =train_test_split(X,Y, test_size = 0.3, random_state =5)
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier
#fit our K-NN model on the training data
Nn = KNeighborsClassifier(8)
Nn.fit(x_train,y_train)
#the score() method calculates the accuracy of model.
print("Accuracy for K-NN is ",Nn.score(x_test,y_test))
Lr = LogisticRegression()
Lr.fit(x_train,y_train)
print("Accuracy for Logistic Regression is ",Lr.score(x_test,y_test))
1. Easily identifies trends and patterns
Machine Learning can review large volumes of data and discover specific trends and patterns that would not be apparent to humans. For instance, for e-commerce websites like Amazon and Flipkart, it serves to understand the browsing behaviors and purchase histories of its users to help cater to the right products, deals, and reminders relevant to them. It uses the results to reveal relevant advertisements to them.
2. Continuous Improvement
We are continuously generating new data, and when we provide this data to the Machine Learning model, it can keep improving over time, increasing its performance and accuracy. We can say it is like gaining experience: models keep improving in accuracy and efficiency, which lets them make better decisions.
3. Handling multidimensional and multi-variety data
Machine Learning algorithms are good at handling data that are multidimensional and multi-variety, and they can do this in dynamic or uncertain environments.
4. Wide Applications
You could be an e-tailer or a healthcare provider and make Machine Learning work for you. Where it does apply, it holds the capability to help deliver a much more personal experience to customers while also targeting the right customers.
1. Data Acquisition
Machine Learning requires a massive amount of data sets to train on, and these should be inclusive/unbiased, and of good quality. There can also be times where we must wait for new data to be generated.
2. Time and Resources
Machine Learning needs enough time to let the algorithms learn and develop enough to fulfill their purpose with a considerable amount of accuracy and relevancy. It also needs massive resources to function. This can mean additional requirements of computer power for you.
3. Interpretation of Results
Another major challenge is the ability to accurately interpret the results generated by the algorithms. You must also carefully choose the algorithm for your purpose. Sometimes, based on some analysis, you might select an algorithm, but that model is not necessarily the best one for the problem.
4. High error-susceptibility
Machine Learning is autonomous but highly susceptible to errors. Suppose you train an algorithm with data sets small enough to not be inclusive. You end up with biased predictions coming from a biased training set. This leads to irrelevant advertisements being displayed to customers. In the case of Machine Learning, such blunders can set off a chain of errors that can go undetected for long periods of time. And when they do get noticed, it takes quite some time to recognize the source of the issue, and even longer to correct it.
Machine Learning can be a competitive advantage for any company, be it a top MNC or a startup, as many things currently done manually will be done by machines tomorrow. With the introduction of projects such as self-driving cars and Sophia (a humanoid robot developed by the Hong Kong-based company Hanson Robotics), we have already caught a glimpse of what the future can be. The Machine Learning revolution will stay with us for a long time, and so will the future of Machine Learning.
How do I start learning Machine Learning?
You first need to start with the basics. You need to understand the prerequisites, which include learning Linear Algebra and Multivariate Calculus, Statistics, and Python. Then you need to learn several ML concepts, which include terminology of Machine Learning, types of Machine Learning, and Resources of Machine Learning. The third step is taking part in competitions. You can also take up a free online statistics for machine learning course and understand the foundational concepts.
Is Machine Learning easy for beginners?
Machine Learning is not the easiest subject to pick up; one of the main difficulties is debugging models when they do not behave as expected. However, if you study the right resources, you will be able to learn Machine Learning without much hassle.
What is a simple example of Machine Learning?
Recommendation Engines (Netflix); Sorting, tagging and categorizing photos (Yelp); Customer Lifetime Value (Asos); Self-Driving Cars (Waymo); Education (Duolingo); Determining Credit Worthiness (Deserve); Patient Sickness Predictions (KenSci); and Targeted Emails (Optimail).
Can I learn Machine Learning in 3 months?
Machine Learning is vast and consists of several topics. Therefore, it will take you around six months to learn it, provided you spend at least 5-6 hours every day. Also, the time taken to learn Machine Learning depends a lot on your mathematical and analytical skills.
Does Machine Learning require coding?
If you are learning traditional Machine Learning, it would require you to know software programming as it will help you to write machine learning algorithms. However, through some online educational platforms, you do not need to know coding to learn Machine Learning.
Is Machine Learning a good career?
Machine Learning is one of the best careers at present. Whether it is for the current demand, job, and salary growth, Machine Learning Engineer is one of the best profiles. You need to be very good at data, automation, and algorithms.
Can I learn Machine Learning without Python?
To learn Machine Learning, you need to have some basic knowledge of Python. A Python distribution that is supported by all operating systems such as Windows, Linux, etc., is Anaconda. It offers an overall package for machine learning, including matplotlib, scikit-learn, and NumPy.
Where can I practice Machine Learning?
The online platforms where you can practice Machine Learning include CloudXLab, Google Colab, Kaggle, MachineHack, and OpenML.
Where can I learn Machine Learning for free?
You can learn the basics of Machine Learning from online platforms like Great Learning. You can enroll in the Beginners Machine Learning course and get the certificate for free. The course is easy and perfect for beginners to start with.
Original article source at: https://www.mygreatlearning.com