Migrate Scala 2.13 Project to Scala 3

Are you a Scala developer looking to migrate your existing Scala 2.13 projects to the latest version of the language? If so, you’ll be happy to know that Scala 3 is now available and comes with a range of new features and improvements. With its streamlined syntax, improved performance, and better compatibility with Java 8 and above, Scala 3 offers a host of benefits for developers working with the language.

However, migrating to a new major version of any programming language can be a daunting task, and Scala 3 is no exception. But don’t worry – we’ve got you covered. In this blog post, we’ll provide you with a step-by-step guide to help you migrate your projects from Scala 2.13 to Scala 3 using the Scala 3 Migrate Plugin. Whether you’re interested in the new features of Scala 3 or just looking to stay up-to-date with the latest version of the language, this guide is for you.

So, let’s get started and take your Scala development to the next level with Scala 3.

Scala 3 Migrate Plugin

The Scala 3 Migrate Plugin is a valuable tool that can help you migrate your codebase to Scala 3. It provides a set of automated rewrites and manual suggestions designed to make the migration as smooth and painless as possible.

The migration process consists of four independent steps that are packaged into an sbt plugin:

  1. migrate-libs: This step helps you update the list of library dependencies in your build file to use the corresponding Scala 3 versions of your dependencies. It ensures that your project’s dependencies are compatible with Scala 3 and can be resolved correctly during the build process.
  2. migrate-scalacOptions: This step helps you update the list of compiler options (scalacOptions) in your build file to use the corresponding Scala 3 options. It ensures that the compiler is using the correct set of options for Scala 3, which can help improve the quality and performance of your code.
  3. migrate-syntax: This step fixes a number of syntax incompatibilities in your Scala 2.13 code so that it can be compiled in Scala 3. It handles common syntax changes between the two versions of Scala and can help you quickly fix issues that would otherwise require significant manual changes.
  4. migrate: This step tries to make your code compile with Scala 3 by adding the minimum required inferred types and implicit arguments. It automates the bulk of the remaining work and quickly surfaces any issues that still need manual attention.

Each of these steps is an sbt command, which we'll examine in detail in the following sections. Make sure to run them in an sbt shell.
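
For reference, a typical session on a module named ticketService (the module from the build file shown later) runs the four commands in order:

sbt> migrate-libs ticketService
sbt> migrate-scalacOptions ticketService
sbt> migrate-syntax ticketService
sbt> migrate ticketService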

Prerequisites

Before using the scala3-migrate plugin, you’ll need to make sure that your development environment meets the following prerequisites:

  1. SBT 1.5 or later: You’ll need to be using SBT as your build tool, and have a version of 1.5 or later installed on your system.
  2. Java 8 or later: The scala3-migrate plugin requires Java 8 or later to run. Make sure it is installed on your system.
  3. Scala 2.13: The scala3-migrate plugin requires Scala 2.13 (preferably 2.13.5) to work correctly. If you're using an earlier version of Scala, you'll need to upgrade first.

By ensuring that your development environment meets these prerequisites, you’ll be able to use the scala3-migrate plugin with confidence and make a smooth transition to Scala 3.

Installation

You can install the scala3-migrate plugin by adding it to your plugins.sbt file:

addSbtPlugin("ch.epfl.scala" % "sbt-scala3-migrate" % "0.5.1")

Choosing a Module to Migrate

The scala3-migrate plugin operates on one module at a time, so for projects with multiple modules, the first step is to choose which module to migrate first.

Choosing the right module to migrate is an important first step in the process of migrating to Scala 3. Here are a few considerations to help you decide which module to migrate first:

  • Start with a small module: Migrating a large codebase all at once can be overwhelming, so it’s best to start with a small, self-contained module that is easy to test and debug. This will allow you to gain confidence in the migration process before tackling larger and more complex modules.
  • Choose a module with clear dependencies: Look for a module that has clear dependencies and is less likely to have complex interactions with other parts of your codebase. This will make it easier to identify any issues that arise during the migration process and ensure that you’re not introducing new bugs or breaking existing functionality.
  • Select a module that uses fewer language features: Some Scala 2 language features have been removed or changed in Scala 3, so it’s best to start with a module that uses fewer of these features. This will make it easier to identify and fix any issues related to the changes in the language.
  • Select a module that is actively developed: It’s a good idea to select a module that is currently under active development, as this will give you the opportunity to address any issues that arise during the migration process as part of your regular development workflow.

Consider these factors to choose a suitable module for migration and gain confidence before tackling more complex code.

Note:

Make sure the module you choose is not an aggregate project, otherwise only its own sources will be migrated, not the sources of its subprojects.

Migrate library dependencies

command: migrate-libs projectId

Migrating library dependencies is an important step in upgrading a Scala 2.13 project to Scala 3. Library dependencies can include external packages, plugins, and other code that your project relies on. Fortunately, the scala3-migrate plugin provides the migrate-libs projectId command (where projectId is the name of the module chosen for migration), which helps you update your library dependencies to be compatible with Scala 3.

Let’s consider the following sbt build that is supposed to be migrated:

//build.sbt
val akkaHttpVersion = "10.2.4"
val akkaVersion = "2.6.5"
val jdbcAndLiftJsonVersion = "3.4.1"
val flywayCore = "3.2.1"
val keycloakVersion = "4.0.0.Final"

scapegoatVersion in ThisBuild := "1.4.8"

lazy val ticketService = project
  .in(file("."))
  .settings(
    name := "ticket-service",
    scalaVersion := "2.13.6",
    semanticdbEnabled := true,
    scalacOptions ++= Seq("-explaintypes", "-Wunused"),
    libraryDependencies ++= Seq(
      "com.typesafe.akka" %% "akka-http" % akkaHttpVersion,
      "com.typesafe.akka" %% "akka-stream" % akkaVersion,
      "net.liftweb" %% "lift-json" % jdbcAndLiftJsonVersion,
      "org.postgresql" % "postgresql" % "42.2.11",
      "org.scalikejdbc" %% "scalikejdbc" % jdbcAndLiftJsonVersion,
      "ch.qos.logback" % "logback-classic" % "1.2.3",
      "com.typesafe.scala-logging" %% "scala-logging" % "3.9.3",
      "ch.megard" %% "akka-http-cors" % "0.4.3",
      "org.apache.commons" % "commons-io" % "1.3.2",
      "org.fusesource.jansi" % "jansi" % "1.12",
      "com.google.api-client" % "google-api-client" % "1.30.9",
      "com.google.apis" % "google-api-services-sheets" % "v4-rev1-1.21.0",
      "com.google.apis" % "google-api-services-admin-directory" % "directory_v1-rev20191003-1.30.8",
      "com.google.oauth-client" % "google-oauth-client-jetty" % "1.30.5",
      "com.google.auth" % "google-auth-library-oauth2-http" % "1.3.0",
      // test lib
      "com.typesafe.akka" %% "akka-stream-testkit" % akkaVersion % Test,
      "com.typesafe.akka" %% "akka-http-testkit" % akkaHttpVersion % Test,
      "com.typesafe.akka" %% "akka-http-spray-json" % akkaHttpVersion,
      "org.scalatest" %% "scalatest" % "3.1.0" % Test,
      "org.mockito" %% "mockito-scala" % "1.11.4" % Test,
      "com.typesafe.akka" %% "akka-testkit" % akkaVersion % Test,
      "com.h2database" % "h2" % "1.4.196",
      //flyway
      "org.flywaydb" % "flyway-core" % flywayCore,
      //swagger-akka-http
      "com.github.swagger-akka-http" %% "swagger-akka-http" % "2.4.2",
      "com.github.swagger-akka-http" %% "swagger-scala-module" % "2.3.1",
      //javax
      "javax.ws.rs" % "javax.ws.rs-api" % "2.0.1",
      "org.keycloak" % "keycloak-core" % keycloakVersion,
      "org.keycloak" % "keycloak-adapter-core" % keycloakVersion,
      "com.github.jwt-scala" %% "jwt-circe" % "9.0.1",
      "org.jboss.logging" % "jboss-logging" % "3.3.0.Final" % Runtime,
      "org.keycloak" % "keycloak-admin-client" % "12.0.2",
      "com.rabbitmq" % "amqp-client" % "5.12.0",
      "org.apache.commons" % "commons-text" % "1.9",
      "org.typelevel" %% "cats-core" % "2.3.0"
    )
  )

Next, we’ll run the command and see the output:

Output

The output lists project dependencies with their current version and required Scala 3-compatible version.

The Valid status indicates that the current version of the dependency is compatible with Scala 3. In contrast, the X status indicates that the dependency is not compatible with the Scala 3 version. The To be updated status displays the latest Scala 3 compatible version of the dependency.

In the given result, it appears that several dependencies are already valid and don't require any updates. However, some dependencies require a specific Scala 3-compatible version, while others cannot be updated to Scala 3 at all.

For example, com.sksamuel.scapegoat:scalac-scapegoat-plugin:1.4.8:provided is marked with an X status, indicating that it is not compatible with Scala 3, so you need to remove it and find an alternative. Moreover, the output suggests that the dependency ch.megard:akka-http-cors:0.4.3 should be updated to "ch.megard" %% "akka-http-cors" % "1.1.3", as the latter version is compatible with Scala 3.

In addition, some dependencies have a cross label next to them, indicating that they need to be used with a specific cross-versioning scheme, as they are not fully compatible with Scala 3. For example, the net.liftweb:lift-json:3.4.1 dependency needs to be used with the cross-versioning scheme CrossVersion.for3Use2_13, as it is only safe to use the 2.13 version if it’s inside an application.
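
For instance, applying those suggestions in the build file might look like this (a sketch based on the output described above):

libraryDependencies ++= Seq(
  // updated to the Scala 3-compatible release reported by migrate-libs
  "ch.megard" %% "akka-http-cors" % "1.1.3",
  // kept on the 2.13 artifact via the suggested cross-versioning scheme
  ("net.liftweb" %% "lift-json" % jdbcAndLiftJsonVersion).cross(CrossVersion.for3Use2_13),
  // ... remaining dependencies unchanged
)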

Overall, this output can help identify which dependencies to update or remove when migrating to Scala 3. By following this migration guide, you can ensure that all the dependencies in your project are compatible with Scala 3.

Once you have applied all the changes suggested in the output, run the migrate-libs command again. When every project dependency shows the Valid status, your library dependencies have been successfully migrated to Scala 3.

Migrate scalacOptions

command: migrate-scalacOptions projectId

The next step for migration is to update the project’s Scala compiler options(scalacOptions) to work with Scala 3.

The Scala compiler options are flags that control the compiler’s behavior when passed to the Scala compiler. These flags can affect the code generation, optimization, and error reporting of the compiler.

In Scala 3, some of the compiler options have been renamed or removed, while others have been added. Therefore, it is important to review and update the scalacOptions when migrating from Scala 2.13 to Scala 3.

To perform this step, we'll run the migrate-scalacOptions command and review its output.

The output shows a list of the scalacOptions found in the project and indicates whether each option is still valid, has been renamed, or is no longer available in Scala 3.

For instance, the line -Wunused -> X indicates that the -Wunused option is not available in Scala 3 and needs to be removed. On the other hand, -explaintypes -> -explain-types shows that the -explaintypes option has been renamed to -explain-types and can still be used in Scala 3. So you just need to rename this scalacOption.

Some scalacOptions are not set by you in the build file but by sbt plugins. For example, the scala3-migrate tool enables semanticdb in Scala 2, which adds the -Yrangepos option. Here sbt will adapt the semanticdb options for Scala 3. Therefore, all the plugin-specific information displayed by migrate-scalacOptions can be ignored if the previous step was completed successfully.

Overall, the output is intended to help you identify which scalacOptions need to be updated or removed in order to migrate the project to Scala 3.

After applying the suggested changes, the updated scalacOptions in the build look like this:

scalacOptions ++= (
  if (scalaVersion.value.startsWith("3")) Seq("-explain-types")
  else Seq("-explaintypes", "-Wunused")
)

Migrate the syntax

command: migrate-syntax projectId

This step is to fix the syntax incompatibilities that may arise when migrating code from Scala 2.13 to Scala 3. An incompatibility is a piece of code that compiles in Scala 2.13 but does not compile in Scala 3. Migrating a code base involves finding and fixing all the incompatibilities of the source code.

The command migrate-syntax is used to perform this step and fixes a number of syntax incompatibilities by applying the following Scalafix rules:

  • ProcedureSyntax
  • fix.scala213.ConstructorProcedureSyntax
  • fix.scala213.ExplicitNullaryEtaExpansion
  • fix.scala213.ParensAroundLambda
  • fix.scala213.ExplicitNonNullaryApply
  • fix.scala213.Any2StringAdd

This command is very useful in making the syntax migration process more efficient and less error-prone. By automatically identifying and fixing syntax incompatibilities, time and effort are saved from manual code changes.

Note that the migrate-syntax command is not guaranteed to fix all syntax incompatibilities. It is still necessary to manually review and update any remaining issues that the tool may have missed.
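
As an illustration, the ProcedureSyntax rule rewrites the old procedure syntax, which Scala 3 no longer supports, into an explicit form (a hypothetical before/after sketch, not actual tool output):

// before: procedure syntax compiles in Scala 2.13 but not in Scala 3
def log(msg: String) { println(msg) }

// after migrate-syntax applies the ProcedureSyntax rule
def log(msg: String): Unit = { println(msg) }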

Let’s run the command and check the output:

The output displays a list of files that previously had syntax incompatibilities and are now fixed after running this command.

Migrate the code: the final step

command: migrate projectId

The final step in the migration process is to use the migrate command to make your code compile with Scala 3.

Scala 3 uses a new type inference algorithm, so its compiler can infer different types than Scala 2.13's compiler does. The migrate command attempts to make your code compile with Scala 3 by adding the minimum required inferred types and implicit arguments.
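
For example (a hypothetical sketch, not actual tool output), a method whose result type was inferred under Scala 2.13 may receive an explicit annotation so that Scala 3 compiles it with the same type:

// before: the result type is inferred by the Scala 2.13 compiler
def makeGreeting(name: String) = s"Hello, $name"

// after: migrate pins down the inferred type explicitly
def makeGreeting(name: String): String = s"Hello, $name"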

When you run the migrate command, it will generate a report that lists any errors or warnings encountered during the compilation process. This report identifies areas of your code needing modification for compatibility with Scala 3.

Overall, the migrate command is an essential tool for the final step in the migration process to Scala 3. It automatically identifies migration issues and ensures full compatibility with Scala 3.

Let’s run the command and see the output:

The output indicates that the project has been successfully migrated to Scala 3.1.1.

If your project has multiple modules, repeat the same migration steps for each of them. Once you've finished migrating each module, remove the scala3-migrate plugin from your project and update the Scala version to 3.1.1 (or add this version to crossScalaVersions).
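
In the build file, that final change is a one-liner (shown here as a sketch):

scalaVersion := "3.1.1"
// or, to keep building for both versions:
crossScalaVersions := Seq("2.13.6", "3.1.1")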

Conclusion

In conclusion, the process of migrating a Scala 2.13 project to Scala 3 can be made much simpler with the use of the scala3-migrate plugin. The plugin automates many migration changes, such as syntax incompatibilities and updating deprecated code. It also provides helpful diagnostics and suggestions for manual changes that are needed. However, it is still important to manually review and test changes to ensure the project runs correctly after migration. Careful planning and attention to detail ensure a successful migration to Scala 3, providing access to new features and benefits.

That’s it for this blog post. I hope that the information provided has been helpful and informative.

Additionally, if you found this post valuable, please share it with your friends and colleagues, or on social media. Sharing information is a great way to help others and build a community of like-minded individuals.

To access more fascinating articles on Scala or any other cutting-edge technologies, visit Knoldus Blogs.

Finally, remember to keep learning and growing. With the vast amount of information available today, there’s always something new to discover and explore. So keep an open mind, stay curious, and never stop seeking knowledge.

Original article source at: https://blog.knoldus.com/


Umzug: Framework Agnostic Migration tool for Node.js

Umzug

Umzug is a framework-agnostic migration tool for Node. It provides a clean API for running and rolling back tasks.

Highlights

  • Written in TypeScript
    • Built-in typings
    • Auto-completion right in your IDE
    • Documentation right in your IDE
  • Programmatic API for migrations
  • Built-in CLI
  • Database agnostic
  • Supports logging of migration process
  • Supports multiple storages for migration data
  • Usage examples

Documentation

Note: these are the docs for the latest version of umzug, which has several breaking changes from v2.x. See the upgrading section for a migration guide. For the previous stable version, please refer to the v2.x branch.

Minimal Example

The following example uses a SQLite database through sequelize and persists the migration data in the database itself through the sequelize storage. There are several more involved examples covering a few different scenarios in the examples folder. Note that although this example uses Sequelize, Umzug isn't coupled to Sequelize; it's just the most commonly used of the supported storages.

// index.js
const { Sequelize } = require('sequelize');
const { Umzug, SequelizeStorage } = require('umzug');

const sequelize = new Sequelize({ dialect: 'sqlite', storage: './db.sqlite' });

const umzug = new Umzug({
  migrations: { glob: 'migrations/*.js' },
  context: sequelize.getQueryInterface(),
  storage: new SequelizeStorage({ sequelize }),
  logger: console,
});

(async () => {
  // Checks migrations and run them if they are not already applied. To keep
  // track of the executed migrations, a table (and sequelize model) called SequelizeMeta
  // will be automatically created (if it doesn't exist already) and parsed.
  await umzug.up();
})();
// migrations/00_initial.js

const { Sequelize } = require('sequelize');

async function up({ context: queryInterface }) {
    await queryInterface.createTable('users', {
        id: {
            type: Sequelize.INTEGER,
            allowNull: false,
            primaryKey: true
        },
        name: {
            type: Sequelize.STRING,
            allowNull: false
        },
        createdAt: {
            type: Sequelize.DATE,
            allowNull: false
        },
        updatedAt: {
            type: Sequelize.DATE,
            allowNull: false
        }
    });
}

async function down({ context: queryInterface }) {
    await queryInterface.dropTable('users');
}

module.exports = { up, down };

Note that we renamed the context argument to queryInterface for clarity. The context is whatever we specified when creating the Umzug instance in index.js.

You can also write your migrations in TypeScript by using `ts-node` in the entrypoint:

// index.ts
require('ts-node/register')

import { Sequelize } from 'sequelize';
import { Umzug, SequelizeStorage } from 'umzug';

const sequelize = new Sequelize({ dialect: 'sqlite', storage: './db.sqlite' });

const umzug = new Umzug({
  migrations: { glob: 'migrations/*.ts' },
  context: sequelize.getQueryInterface(),
  storage: new SequelizeStorage({ sequelize }),
  logger: console,
});

// export the type helper exposed by umzug, which will have the `context` argument typed correctly
export type Migration = typeof umzug._types.migration;

(async () => {
  await umzug.up();
})();
// migrations/00_initial.ts
import type { Migration } from '..';

// types will now be available for `queryInterface`
export const up: Migration = ({ context: queryInterface }) => queryInterface.createTable(...)
export const down: Migration = ({ context: queryInterface }) => queryInterface.dropTable(...)

See these tests for more examples of Umzug usage, including:

  • passing ignore and cwd parameters to the glob instructions
  • customising migrations ordering
  • finding migrations from multiple different directories
  • using non-js file extensions via a custom resolver, e.g. .sql

Usage

Installation

Umzug is available on npm:

npm install umzug

Umzug instance

It is possible to configure an Umzug instance by passing an object to the constructor.

const { Umzug } = require('umzug');
const umzug = new Umzug({ /* ... options ... */ });

Detailed documentation for these options are in the UmzugOptions TypeScript interface, which can be found in src/types.ts.

Getting all pending migrations

You can get a list of pending (i.e. not yet executed) migrations with the pending() method:

const migrations = await umzug.pending();
// returns an array of all pending migrations.

Getting all executed migrations

You can get a list of already executed migrations with the executed() method:

const migrations = await umzug.executed();
// returns an array of all already executed migrations

Executing pending migrations

The up method can be used to execute all pending migrations.

const migrations = await umzug.up();
// returns an array of all executed migrations

It is also possible to pass the name of a migration in order to just run the migrations from the current state to the passed migration name (inclusive).

await umzug.up({ to: '20141101203500-task' });

To limit the number of migrations that are run, step can be used:

// This will run the next two migrations
await umzug.up({ step: 2 })

Running specific migrations while ignoring the right order can be done like this:

await umzug.up({ migrations: ['20141101203500-task', '20141101203501-task-2'] });

Reverting executed migration

The down method can be used to revert the last executed migration.

const migration = await umzug.down();
// reverts the last migration and returns it.

To revert more than one migration, you can use step:

// This will revert the last two migrations
await umzug.down({ step: 2 });

It is possible to pass the name of a migration until which (inclusive) the migrations should be reverted. This allows the reverting of multiple migrations at once.

const migrations = await umzug.down({ to: '20141031080000-task' });
// returns an array of all reverted migrations.

To revert all migrations, you can pass 0 as the to parameter:

await umzug.down({ to: 0 });

Reverting specific migrations while ignoring the right order can be done like this:

await umzug.down({ migrations: ['20141101203500-task', '20141101203501-task-2'] });

Migrations

There are two ways to specify migrations: via files or directly via an array of migrations.

Migration files

A migration file ideally exposes up and down async functions, which perform the task of upgrading or downgrading the database.

module.exports = {
  async up() {
    /* ... */
  },
  async down() {
    /* ... */
  }
};

Migration files can be located anywhere - they will typically be loaded according to a glob pattern provided to the Umzug constructor.

Direct migrations list

You can also specify a list of migrations directly to the Umzug constructor:

const { Umzug } = require('umzug');

const umzug = new Umzug({
  migrations: [
    {
      // the name of the migration is mandatory
      name: '00-first-migration',
      async up({ context }) { /* ... */ },
      async down({ context }) { /* ... */ }
    },
    {
      name: '01-foo-bar-migration',
      async up({ context }) { /* ... */ },
      async down({ context }) { /* ... */ }
    }
  ],
  context: sequelize.getQueryInterface(),
  logger: console,
});

Modifying the parameters passed to your migration methods

Sometimes it's necessary to modify the parameters umzug will pass to your migration methods when the library calls the up and down methods for each migration. This is the case when using migrations generated by sequelize-cli. In this case you can use the resolve function during migration configuration to determine which parameters will be passed to the relevant method:

import { Sequelize } from 'sequelize'
import { Umzug, SequelizeStorage } from 'umzug'

const sequelize = new Sequelize(
    ...
)

const umzug = new Umzug({
    migrations: {
        glob: 'migrations/*.js',
        resolve: ({ name, path, context }) => {
            const migration = require(path)
            return {
                // adjust the parameters Umzug will
                // pass to migration methods when called
                name,
                up: async () => migration.up(context, Sequelize),
                down: async () => migration.down(context, Sequelize),
            }
        },
    },
    context: sequelize.getQueryInterface(),
    storage: new SequelizeStorage({ sequelize }),
    logger: console,
});

Additional migration configuration options

To load migrations in another format, you can use the resolve function:

const { Umzug } = require('umzug')
const { Sequelize } = require('sequelize')
const fs = require('fs')

const umzug = new Umzug({
  migrations: {
    glob: 'migrations/*.up.sql',
    resolve: ({ name, path, context: sequelize }) => ({
      name,
      up: async () => {
        const sql = fs.readFileSync(path).toString()
        return sequelize.query(sql)
      },
      down: async () => {
        // Get the corresponding `.down.sql` file to undo this migration
        const sql = fs.readFileSync(path.replace('.up.sql', '.down.sql')).toString()
        return sequelize.query(sql)
      }
    })
  },
  context: new Sequelize(...),
  logger: console,
});

You can support mixed migration file types, and use umzug's default resolver for javascript/typescript:

const { Umzug } = require('umzug')
const { Sequelize } = require('sequelize')
const fs = require('fs')

const umzug = new Umzug({
  migrations: {
    glob: 'migrations/*.{js,ts,up.sql}',
    resolve: (params) => {
      if (!params.path.endsWith('.sql')) {
        return Umzug.defaultResolver(params)
      }
      const { context: sequelize } = params
      return {
        name: params.name,
        up: async () => {
          const sql = fs.readFileSync(params.path).toString()
          return sequelize.query(sql)
        },
        down: async () => {
          // Get the corresponding `.down.sql` file to undo this migration
          const sql = fs.readFileSync(params.path.replace('.up.sql', '.down.sql')).toString()
          return sequelize.query(sql)
        }
      }
    },
  },
  logger: console,
  context: new Sequelize(...),
});

The glob syntax allows loading migrations from multiple locations:

const { Umzug } = require('umzug')
const { Sequelize } = require('sequelize')

const umzug = new Umzug({
  migrations: {
    glob: '{first-folder/*.js,second-folder-with-different-naming-convention/*.js}',
  },
  context: new Sequelize(...),
  logger: console,
});

Note on migration file sorting:

  • file matches, found using glob, will be lexicographically sorted based on their paths
    • so if your migrations are one/m1.js, two/m2.js, three/m3.js, the resultant order will be one/m1.js, three/m3.js, two/m2.js
    • similarly, if your migrations are called m1.js, m2.js, ... m10.js, m11.js, the resultant ordering will be m1.js, m10.js, m11.js, ... m2.js
  • The easiest way to deal with this is to ensure your migrations appear in a single folder, and their paths match lexicographically with the order they should run in
  • If this isn't possible, the ordering can be customised using a new instance (previously, in the beta release for v3, this could be done with .extend(...) - see below for example using a new instance)

Upgrading from v2.x

The Umzug class should be imported as a named import, i.e. import { Umzug } from 'umzug'.

The MigrationMeta type, which is returned by umzug.executed() and umzug.pending(), no longer has a file property - it has a name and optional path - since migrations are not necessarily bound to files on the file system.

The migrations.glob parameter replaces path, pattern and traverseDirectories. It can be used, in combination with cwd and ignore to do much more flexible file lookups. See https://npmjs.com/package/glob for more information on the syntax.
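
For example, a more flexible lookup might look like this (a sketch; the tuple form passes cwd and ignore options through to glob):

const umzug = new Umzug({
  migrations: {
    // resolve the pattern relative to this file and skip test files
    glob: ['migrations/*.js', { cwd: __dirname, ignore: ['**/*.test.js'] }],
  },
  context: sequelize.getQueryInterface(),
  logger: console,
});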

The migrations.resolve parameter replaces customResolver. Explicit support for wrap and nameFormatter has been removed - these can be easily implemented in a resolve function.

The constructor option logging is replaced by logger to allow for warn and error messages in future. NodeJS's global console object can be passed to this. To disable logging, replace logging: false with logger: undefined.

Events have moved from the default nodejs EventEmitter to emittery. It has better design for async code, a less bloated API surface and strong types. But, it doesn't allow passing multiple arguments to callbacks, so listeners have to change slightly, as well as .addListener(...) and .removeListener(...) no longer being supported (.on(...) and .off(...) should now be used):

Before:

umzug.on('migrating', (name, m) => console.log({ name, path: m.path }))

After:

umzug.on('migrating', ev => console.log({ name: ev.name, path: ev.path }))

The Umzug#execute method is removed. Use Umzug#up or Umzug#down.

The options for Umzug#up and Umzug#down have changed:

  • umzug.up({ to: 'some-name' }) and umzug.down({ to: 'some-name' }) are still valid.
  • umzug.up({ from: '...' }) and umzug.down({ from: '...' }) are no longer supported. To run migrations out-of-order (which is not generally recommended), you can explicitly use umzug.up({ migrations: ['...'] }) and umzug.down({ migrations: ['...'] }).
  • name matches must be exact. umzug.up({ to: 'some-n' }) will no longer match a migration called some-name.
  • umzug.down({ to: 0 }) is still valid but umzug.up({ to: 0 }) is not.
  • umzug.up({ migrations: ['m1', 'm2'] }) is still valid but the shorthand umzug.up(['m1', 'm2']) has been removed.
  • umzug.down({ migrations: ['m1', 'm2'] }) is still valid but the shorthand umzug.down(['m1', 'm2']) has been removed.
  • umzug.up({ migrations: ['m1', 'already-run'] }) will throw an error, if already-run is not found in the list of pending migrations.
  • umzug.down({ migrations: ['m1', 'has-not-been-run'] }) will throw an error, if has-not-been-run is not found in the list of executed migrations.
  • umzug.up({ migrations: ['m1', 'm2'], rerun: 'ALLOW' }) will re-apply migrations m1 and m2 even if they've already been run.
  • umzug.up({ migrations: ['m1', 'm2'], rerun: 'SKIP' }) will skip migrations m1 and m2 if they've already been run.
  • umzug.down({ migrations: ['m1', 'm2'], rerun: 'ALLOW' }) will "revert" migrations m1 and m2 even if they've never been run.
  • umzug.down({ migrations: ['m1', 'm2'], rerun: 'SKIP' }) will skip reverting migrations m1 and m2 if they haven't been run or are already reverted.
  • umzug.up({ migrations: ['m1', 'does-not-exist', 'm2'] }) will throw an error if the migration name is not found. Note that the error will be thrown and no migrations run unless all migration names are found - whether or not rerun: 'ALLOW' is added.

The context parameter replaces params, and is passed in as a property to migration functions in an options object, alongside name and path. This means the signature for migrations, which in v2 was (context) => Promise<void>, has changed slightly in v3, to ({ name, path, context }) => Promise<void>.

Handling existing v2-format migrations

The resolve function can also be used to upgrade your umzug version to v3 when you have existing v2-compatible migrations:

const { Umzug } = require('umzug');

const umzug = new Umzug({
  migrations: {
    glob: 'migrations/umzug-v2-format/*.js',
    resolve: ({name, path, context}) => {
      // Adjust the migration from the new signature to the v2 signature, making it easier to upgrade to v3
      const migration = require(path)
      return { name, up: async () => migration.up(context), down: async () => migration.down(context) }
    }
  },
  context: sequelize.getQueryInterface(),
  logger: console,
});

Similarly, you no longer need migrationSorting; you can instantiate a new Umzug instance to manipulate the migration list directly:

const { Umzug } = require('umzug');

const parent = new Umzug({
  migrations: { glob: 'migrations/**/*.js' },
  context: sequelize.getQueryInterface(),
})

const umzug = new Umzug({
  ...parent.options,
  migrations: async ctx => (await parent.migrations()).sort((a, b) => b.path.localeCompare(a.path)),
})

Storages

Storages define where the migration data is stored.

JSON Storage

Using JSONStorage will create a JSON file which will contain an array with all the executed migrations. You can specify the path to the file. The default for that is umzug.json in the working directory of the process.

Detailed documentation for the options it can take are in the JSONStorageConstructorOptions TypeScript interface, which can be found in src/storage/json.ts.
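
A minimal setup might look like this (a sketch; path is the option described above):

const { Umzug, JSONStorage } = require('umzug');

const umzug = new Umzug({
  migrations: { glob: 'migrations/*.js' },
  // executed migration names are persisted here instead of the default ./umzug.json
  storage: new JSONStorage({ path: 'migrations/state.json' }),
  logger: console,
});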

Memory Storage

Using memoryStorage will store migrations with an in-memory array. This can be useful for proof-of-concepts or tests, since it doesn't interact with databases or filesystems.

It doesn't take any options, just import the memoryStorage function and call it to return a storage instance:

import { Umzug, memoryStorage } from 'umzug'

const umzug = new Umzug({
  migrations: ...,
  storage: memoryStorage(),
  logger: console,
})

Sequelize Storage

Using SequelizeStorage will create a table in your SQL database called SequelizeMeta containing an entry for each executed migration. You will have to pass a configured instance of Sequelize or an existing Sequelize model. Optionally you can specify the model name, table name, or column name. All major Sequelize versions are supported.

Detailed documentation for the options it can take are in the _SequelizeStorageConstructorOptions TypeScript interface, which can be found in src/storage/sequelize.ts.

This library has been tested with sequelize v6. It may or may not work with lower versions - use at your own risk.
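
For example, to customise the bookkeeping table (a sketch; tableName is one of the options mentioned above):

const storage = new SequelizeStorage({
  sequelize,
  // use this table instead of the default SequelizeMeta
  tableName: 'schema_migrations',
});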

MongoDB Storage

Using MongoDBStorage will create a collection in your MongoDB database called migrations containing an entry for each executed migration. You will either have to pass a MongoDB Driver collection as the collection property, or pass an established MongoDB Driver connection and a collection name.

Detailed documentation for the options it can take are in the MongoDBStorageConstructorOptions TypeScript interface, which can be found in src/storage/mongodb.ts.
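
The first form might look like this (a sketch; assumes the official mongodb driver and a local database named mydb):

const { MongoClient } = require('mongodb');
const { Umzug, MongoDBStorage } = require('umzug');

(async () => {
  const client = new MongoClient('mongodb://localhost:27017');
  await client.connect();

  const umzug = new Umzug({
    migrations: { glob: 'migrations/*.js' },
    // pass a MongoDB driver collection directly
    storage: new MongoDBStorage({ collection: client.db('mydb').collection('migrations') }),
    logger: console,
  });

  await umzug.up();
})();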

Custom

In order to use a custom storage, you can pass your storage instance to the Umzug constructor.

class CustomStorage {
  constructor(...) {...}
  logMigration(...) {...}
  unlogMigration(...) {...}
  executed(...) {...}
}

const umzug = new Umzug({ storage: new CustomStorage(...), logger: console })

Your instance must adhere to the UmzugStorage interface. If you're using TypeScript you can ensure this at compile time, and get IDE type hints by importing it:

import { UmzugStorage } from 'umzug'

class CustomStorage implements UmzugStorage {
  /* ... */
}

Events

Umzug is an emittery event emitter. Each of the following events will be called with migration parameters as its payload (with context, name, and nullable path properties). Events are a convenient place to implement application-specific logic that must run around each migration:

  • migrating - A migration is about to be executed.
  • migrated - A migration has successfully been executed.
  • reverting - A migration is about to be reverted.
  • reverted - A migration has successfully been reverted.

There are also command events, which run at the beginning and end of calls to up or down. They'll receive an object containing a context property:

  • beforeCommand - Before any command ('up' | 'down' | 'executed' | 'pending') is run.
  • afterCommand - After any command ('up' | 'down' | 'executed' | 'pending') is run. Note: this will always run, even if the command throws an error.

The FileLocker class uses beforeCommand and afterCommand to implement a simple filesystem-based locking mechanism.

All events are type-safe, so IDEs will prevent typos and supply strong types for the event payloads.
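
For instance, attaching listeners looks like this (a sketch; the payload property names follow the descriptions above):

// log each migration as it starts and finishes
umzug.on('migrating', ev => console.log('starting', ev.name));
umzug.on('migrated', ev => console.log('finished', ev.name));

// command hooks always run, even when a command throws
umzug.on('beforeCommand', ev => console.log('command starting', ev.command));
umzug.on('afterCommand', ev => console.log('command finished', ev.command));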

Errors

When a migration throws an error, it will be wrapped in a MigrationError which captures the migration metadata (name, path etc.) as well as the original error message, and will be rethrown. In most cases, this is expected behaviour, and doesn't require any special handling beyond standard error logging setups.

If you expect failures and want to try to recover from them, you will need to try-catch the call to umzug.up(). You can access the original error from the .cause property if necessary:

const { MigrationError } = require('umzug');

try {
  await umzug.up();
} catch (e) {
  if (e instanceof MigrationError) {
    const original = e.cause;
    // do something with the original error here
  }
  throw e;
}

Under the hood, verror is used to wrap errors.

CLI

🚧🚧🚧 The CLI is new to Umzug v3. Feedback on it is welcome in discussions 🚧🚧🚧

Umzug instances provide a .runAsCLI() method. When called, this method will automatically cause your program to become a complete CLI, with help text and such:

// migrator.js
const { Umzug } = require('umzug')

const umzug = new Umzug({ ... })

exports.umzug = umzug

if (require.main === module) {
  umzug.runAsCLI()
}

CLI Usage

A script like the one above is now a runnable CLI program. You can run node migrator.js --help to see how to use it. It will print something like:

usage: <script> [-h] <command> ...

Umzug migrator

Positional arguments:
  <command>
    up        Applies pending migrations
    down      Revert migrations
    pending   Lists pending migrations
    executed  Lists executed migrations
    create    Create a migration file

Optional arguments:
  -h, --help  Show this help message and exit.

For detailed help about a specific command, use: <script> <command> -h

Running migrations

node migrator up and node migrator down apply and revert migrations respectively. They're the equivalent of the .up() and .down() methods.

Use node migrator up --help and node migrator down --help for options (running "to" a specific migration, passing migration names to be run explicitly, and specifying the rerun behavior):

Up:

usage: <script> up [-h] [--to NAME] [--step COUNT] [--name MIGRATION]
                   [--rerun {THROW,SKIP,ALLOW}]
                   

Performs all migrations. See --help for more options

Optional arguments:
  -h, --help            Show this help message and exit.
  --to NAME             All migrations up to and including this one should be 
                        applied.
  --step COUNT          Run this many migrations. If not specified, all will 
                        be applied.
  --name MIGRATION      Explicity declare migration name(s) to be applied.
  --rerun {THROW,SKIP,ALLOW}
                        Specify what action should be taken when a migration 
                        that has already been applied is passed to --name. 
                        The default value is "THROW".

Down:

usage: <script> down [-h] [--to NAME] [--step COUNT] [--name MIGRATION]
                     [--rerun {THROW,SKIP,ALLOW}]
                     

Undoes previously-applied migrations. By default, undoes the most recent 
migration only. Use --help for more options. Useful in development to start 
from a clean slate. Use with care in production!

Optional arguments:
  -h, --help            Show this help message and exit.
  --to NAME             All migrations up to and including this one should be 
                        reverted. Pass "0" to revert all.
  --step COUNT          Run this many migrations. If not specified, one will 
                        be reverted.
  --name MIGRATION      Explicity declare migration name(s) to be reverted.
  --rerun {THROW,SKIP,ALLOW}
                        Specify what action should be taken when a migration 
                        that has already been reverted is passed to --name. 
                        The default value is "THROW".

Listing migrations

node migrator pending # list migrations yet to be run
node migrator executed # list migrations that have already run

node migrator pending --json # list pending migrations including names and paths, in a json array format
node migrator executed --json # list executed migrations including names and paths, in a json array format

node migrator pending --help # show help/options
node migrator executed --help # show help/options
usage: <script> pending [-h] [--json]

Prints migrations returned by `umzug.pending()`. By default, prints migration 
names one per line.

Optional arguments:
  -h, --help  Show this help message and exit.
  --json      Print pending migrations in a json format including names and 
              paths. This allows piping output to tools like jq. Without this 
              flag, the migration names will be printed one per line.

 

usage: <script> executed [-h] [--json]

Prints migrations returned by `umzug.executed()`. By default, prints 
migration names one per line.

Optional arguments:
  -h, --help  Show this help message and exit.
  --json      Print executed migrations in a json format including names and 
              paths. This allows piping output to tools like jq. Without this 
              flag, the migration names will be printed one per line.

Creating migrations - CLI

Usually, migrations correspond to files on the filesystem. The CLI exposes a way to create migration files easily:

node migrator create --name my-migration.js

This will create a file with a name like 2000.12.25T12.34.56.my-migration.js in the same directory as the most recent migration file. If it's the very first migration file, you need to specify the folder explicitly:

node migrator create --name my-migration.js --folder path/to/directory

The timestamp prefix can be customized to be date-only or omitted, but be aware that it's strongly recommended to ensure your migrations are lexicographically sortable so it's easy for humans and tools to determine what order they should run in - so the default prefix is recommended.

This will generate a migration file called <<timestamp>>.my-migration.js with the default migration template for .js files that ships with Umzug.

Umzug also ships with default templates for .ts, .cjs, .mjs and .sql files. Umzug will choose the template based on the extension you provide in name.

You can specify a custom template for your project when constructing an umzug instance via the template option. It should be a function which receives a filepath string, and returns an array of [filepath, content] pairs. Usually, just one pair is needed, but a second could be used to include a "down" migration in a separate file:

const umzug = new Umzug({
  migrations: ...,
  create: {
    template: filepath => [
      [filepath, fs.readFileSync('path/to/your/template/file').toString()],
    ],
  },
})

The create command includes some safety checks to make sure migrations aren't created with ambiguous ordering, and that they will be picked up by umzug when applying migrations.

Use node migrator create --help for more options:

usage: <script> create [-h] --name NAME [--prefix {TIMESTAMP,DATE,NONE}]
                       [--folder PATH] [--allow-extension EXTENSION]
                       [--skip-verify] [--allow-confusing-ordering]
                       

Generates a placeholder migration file using a timestamp as a prefix. By 
default, mimics the last existing migration, or guesses where to generate the 
file if no migration exists yet.

Optional arguments:
  -h, --help            Show this help message and exit.
  --name NAME           The name of the migration file. e.g. my-migration.js, 
                        my-migration.ts or my-migration.sql. Note - a prefix 
                        will be added to this name, usually based on a 
                        timestamp. See --prefix
  --prefix {TIMESTAMP,DATE,NONE}
                        The prefix format for generated files. TIMESTAMP uses 
                        a second-resolution timestamp, DATE uses a 
                        day-resolution timestamp, and NONE removes the prefix 
                        completely. The default value is "TIMESTAMP".
  --folder PATH         Path on the filesystem where the file should be 
                        created. The new migration will be created as a 
                        sibling of the last existing one if this is omitted.
  --allow-extension EXTENSION
                        Allowable extension for created files. By default .js,
                         .ts and .sql files can be created. To create txt 
                        file migrations, for example, you could use '--name 
                        my-migration.txt --allow-extension .txt' This 
                        parameter may alternatively be specified via the 
                        UMZUG_ALLOW_EXTENSION environment variable.
  --skip-verify         By default, the generated file will be checked after 
                        creation to make sure it is detected as a pending 
                        migration. This catches problems like creation in the 
                        wrong folder, or invalid naming conventions. This 
                        flag bypasses that verification step.
  --allow-confusing-ordering
                        By default, an error will be thrown if you try to 
                        create a migration that will run before a migration 
                        that already exists. This catches errors which can 
                        cause problems if you change file naming conventions. 
                        If you use a custom ordering system, you can disable 
                        this behavior, but it's strongly recommended that you 
                        don't! If you're unsure, just ignore this option.

Creating migrations - API

Umzug includes an optional helper for generating migration files. It's often most convenient to create files using the CLI helper, but the equivalent API also exists on an umzug instance:

await umzug.create({ name: 'my-new-migration.js' })
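
The options mirror the CLI flags (a sketch; folder and prefix correspond to --folder and --prefix):

await umzug.create({
  name: 'add-users-table.js',
  folder: 'migrations', // where to create the file, like --folder
  prefix: 'DATE',       // day-resolution prefix, like --prefix DATE
});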

Download Details:

Author: Sequelize
Source Code: https://github.com/sequelize/umzug 
License: MIT license


Update table Structure using Migration in CodeIgniter 4

Migration makes table creation and managing them easier. Using this you can recreate tables or update a table without losing its data.

In this tutorial, I show how you can update table structure using migration in CodeIgniter 4.

Contents

  1. Database Configuration
  2. Create Table
  3. Update Table Structure Using migrate:refresh
  4. Update Table Structure without losing data
  5. Conclusion

1. Database configuration

  • Open .env file which is available at the project root.

NOTE – If the file name doesn't start with a dot (.), rename the file to .env.

  • Remove # from start of database.default.hostname, database.default.database, database.default.username, database.default.password, and database.default.DBDriver.
  • Update the configuration and save it.
database.default.hostname = 127.0.0.1
database.default.database = testdb
database.default.username = root
database.default.password = 
database.default.DBDriver = MySQLi

2. Create Table

  • Create a table employees using migration.
php spark migrate:create create_employees_table
  • Now, navigate to app/Database/Migrations/ folder from the project root.
  • Find a PHP file that ends with CreateEmployeesTable and open it.
  • Define the employees table structure in the up() method.
  • In the down() method, drop the employees table; this method is called when the migration is rolled back.
<?php

namespace App\Database\Migrations;

use CodeIgniter\Database\Migration;

class CreateEmployeesTable extends Migration
{
      public function up(){
           $this->forge->addField([
                'id' => [
                     'type' => 'INT',
                     'constraint' => 5,
                     'unsigned' => true,
                     'auto_increment' => true,
                ],
                'emp_name' => [
                     'type' => 'VARCHAR',
                     'constraint' => '100',
                ],
                'email' => [
                     'type' => 'VARCHAR',
                     'constraint' => '100',
                ],
                'city' => [
                     'type' => 'VARCHAR',
                     'constraint' => '100',
                ],
           ]);
           $this->forge->addKey('id', true);
           $this->forge->createTable('employees');
      }

      public function down(){
           $this->forge->dropTable('employees');
      }
}
  • Run the migration –
php spark migrate

I added some records to the table.

Employees Table with records


3. Update Table Structure Using migrate:refresh

Using migrate:refresh you can recreate tables.


Steps –

  • Again open CreateEmployeesTable migration PHP file in app/Database/Migrations/ folder.
  • Modify table structure in the up() method –
public function up(){
      $this->forge->addField([
          'id' => [
                'type' => 'INT',
                'constraint' => 5,
                'unsigned' => true,
                'auto_increment' => true,
          ],
          'fullname' => [
                'type' => 'VARCHAR',
                'constraint' => '191',
          ],
          'email' => [
                'type' => 'VARCHAR',
                'constraint' => '100',
          ],
          'city' => [
                'type' => 'VARCHAR',
                'constraint' => '100',
          ],
          'age' => [
                'type' => 'INT',
                'constraint' => '3',
          ],
      ]);
      $this->forge->addKey('id', true);
      $this->forge->createTable('employees');
}
  • Here, I did the following –
    • Changed the column name from emp_name to fullname and the constraint value from 100 to 191.
    • Added a new column, age.
  • Refresh the migration –
php spark migrate:refresh

NOTE – The above command will recreate the whole database, deleting its data.

Output after migrate:refresh execution in CodeIgniter 4


4. Update Table Structure without losing data

To do this, create a new migration –

php spark make:migration update_and_addfield_to_employees_table
  • Open a PHP file that ends with UpdateAndAddfieldToEmployeesTable.php in app/Database/Migrations/ folder.
  • Define table modification in up() method –
    • Rename column name from emp_name to fullname.
    • Add a new column age.
  • Reset table structure using down() method –
    • Rename column name from fullname to emp_name.
    • Delete age column.

NOTE – If you created more than one column while altering the table, then in the down() method list the column names, separated by commas, inside the [] array passed to dropColumn().

<?php

namespace App\Database\Migrations;

use CodeIgniter\Database\Migration;

class UpdateAndAddfieldToEmployeesTable extends Migration
{

     public function up(){

         ## Rename column name from emp_name to fullname 
         $alterfields = [
              'emp_name' => [
                    'name' => 'fullname',
                    'type' => 'VARCHAR',
                    'constraint' => '100',
              ],
         ];
         $this->forge->modifyColumn('employees', $alterfields);

         ## Add age column
         $addfields = [
              'age' => [
                    'type' => 'INT',
                    'constraint' => '3',
              ],
         ];
         $this->forge->addColumn('employees', $addfields);
     }

     public function down(){
         
         ## Delete 'age' column
         $this->forge->dropColumn('employees', ['age']);

         ## Rename column name from fullname to emp_name
         $fields = [
              'fullname' => [
                    'name' => 'emp_name',
                    'type' => 'VARCHAR',
                    'constraint' => '100',
              ],
         ];
         $this->forge->modifyColumn('employees', $fields);
     }
}
  • Run the migration –
php spark migrate

After execution, the employees table structure is changed and its data is preserved.

Output of Update Table structure using new migration
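
If you ever need to undo this change, rolling back the migration runs the down() method defined above:

php spark migrate:rollback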


5. Conclusion

Use migrate:refresh only when you want to recreate all tables using migration; otherwise, create a new migration file for updating the existing table.

You can also alter more than 1 table using a single migration file.

If you found this tutorial helpful then don't forget to share.

Original article source at: https://makitweb.com/


Migrate Your Code From PHP 7.4 to 8.1

With the recent end-of-life for PHP 7.4, it's time to migrate your code. Here are a few options to do that.

The end-of-life (EOL) for PHP 7.4 was Monday, November 28, 2022. If you’re like me, that date snuck up much faster than anticipated. While your PHP 7.4 code isn’t going to immediately stop working, you do need to begin making plans for the future of this codebase.

What are your options?

You could continue to remain on PHP 7.4, but there are several benefits to updating. The biggest are security and support. As we move farther and farther away from the EOL date, attackers will turn their focus to PHP 7.4, knowing that any vulnerabilities they discover will go unpatched in the majority of systems. Staying on PHP 7.4 drastically increases the risk of your site being compromised in the future. In a similar vein, finding support for issues you encounter with PHP 7.4 will become increasingly difficult. In addition, you will most likely begin to encounter compatibility issues with third-party code/packages as they update their code to be compatible with later versions and drop support for 7.4. You'll also be missing out on significant speed and performance improvements introduced in 8.0 and further improved in 8.1. But upgrading all that legacy code is daunting!

Where to start?

Luckily, PHP provides an official migration guide from PHP 7.4 to 8.0 to get you started (and an 8.0 to 8.1 migration guide as well). Be sure to read through the Backward Incompatible Changes and Deprecated Features sections. While these guides are incredibly handy, you may very well have tens of thousands of lines of code to check, some of which you may have inherited. Fortunately, there are some options to help pinpoint potential problem areas in the migration.

PHPCodeSniffer + PHPCompatibility sniffs

PHPCodeSniffer (PCS) is a package for syntax checking of PHP code. It checks your code against a collection of defined rules (aka "sniffs") referred to as "standards". PHPCodeSniffer ships with a collection of standards you can use, including PEAR, PSR1, PSR2, PSR12, Squiz, and Zend. Luckily, you can write your own collection of sniffs to define any set of rules you like.
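
For example, checking a directory against one of the bundled standards looks like this (illustrative; src/ is a placeholder path):

$ phpcs --standard=PSR12 src/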

PHPCompatibility has entered the chat

PHPCompatibility “is a set of sniffs for PHP CodeSniffer that checks for PHP cross-version compatibility” allowing you to test your codebase for compatibility with different versions of PHP, including PHP 8.0 and 8.1. This means you can use PHPCodeSniffer to scan your codebase, applying the rules from PHPCompability to sniff out any incompatibilities with PHP 8.1 that might be present.

Before I continue…

While PHP 8.2 was released on December 8, 2022, and I encourage you to begin looking over the official 8.1 to 8.2 migration guide and begin making plans to upgrade, most of the checkers I mention in this article have not completed full support for 8.2 at this time. For those reasons, I'll be focusing on migrating the code to PHP 8.1, and not 8.2.

In the process of writing this article, I discovered PHPCompatibility has a known issue when checking for compatibility with PHP 8.0/8.1 where it will report issues that should be Errors as Warnings. The only workaround for now is to use the develop branch of PHPCompatibility instead of master. While they state it is stable, please be aware that in this article, I'm using the non-stable branch. You may want to weigh the pros and cons of using the develop branch before implementing it anywhere other than in a local development environment. While I found PCS+PHPCompatibility to be the most straightforward and comprehensive solution for checking for incompatible code, if you do not want to use a non-stable version of PCS, see the section at the end of the article about alternative options.

For the purposes of this article, I’ll be using the 1.4.6 version of SimpleSAMLphp to test for incompatibilities. This is a six-year-old version of the codebase. I do this not to pick on SimpleSAMLphp, but because I wanted something that would definitely have some errors. As it turns out, all of the Platform.sh code I tested, as well as my own code, was already compatible with PHP 8.1 and required no changes.

Get started

To get started, first clone your codebase, and then create a new branch. You’ll now need to decide if you want to install the dependencies and run the scans on your local machine or in a local development environment using something like DDEV, Lando, or Docksal. In this demo, I’m using DDEV. I suggest using a local development environment instead of running directly on your local machine: while you aren’t strictly required to run the scans under the PHP version you want to test against, you’ll get the best results if you do. If you don’t have PHP installed, or don’t have the target version installed, a local development environment allows you to create an ephemeral environment with exactly what you need without changing your machine.

After setting up your environment for PHP 8.1, at a terminal prompt (in my case, I’ve run ddev start and, once the containers are available, shelled into the web app using ddev ssh), you need to add the new packages you’ll use to test with. I’ll be adding them with Composer; however, there are multiple ways to install them if you would prefer to do so differently. If your codebase isn’t already using Composer, you’ll need to run composer init before continuing.

Because you'll be using the develop branch of PHPCompatibility, there are a couple of extra steps that aren’t in the regular installation instructions. The first is that the develop branch of PHPCompatibility requires an alpha version of phpcsstandards/phpcsutils. Because it is marked as alpha, you'll need to let Composer know this one package is OK to install even though it is below your minimum stability requirements.

$ composer require --dev phpcsstandards/phpcsutils:"^1.0@dev"

Next, install PHPCompatibility, targeting the develop branch:

$ composer require --dev phpcompatibility/php-compatibility:dev-develop

The develop branch also installs dealerdirect/phpcodesniffer-composer-installer so you don’t need to add it manually or direct PCS to this new standard.

To verify the new standards are installed, have PCS display the standards it is aware of:

$ phpcs -i
The installed coding standards are MySource, PEAR, PSR1, PSR2, PSR12, Squiz, Zend, PHPCompatibility, PHPCS23Utils and PHPCSUtils

Now that you know your standards are available, you can have PCS scan your code. To instruct PCS to use a specific standard, use the --standard option and tell it to use PHPCompatibility. However, you also need to tell PHPCompatibility which PHP version you want to test against. For that, use PCS’ --runtime-set option and pass it the key testVersion and a value of 8.1.

Before you start the scan, the one issue remaining is that the code you want to scan is in the root of the project (.), but the vendor directory is also in the project root. You don’t want the code in vendor scanned, as those aren’t packages you necessarily control. PCS allows you to exclude files/directories with the --ignore option. Finally, you want to see progress as PCS parses the files, so you'll pass in the -p option.

Putting it all together:

$ phpcs -p . --standard=PHPCompatibility --runtime-set testVersion 8.1 --ignore=*/vendor/*

This kicks off PCS, which will output its progress as it scans through your project’s code. W indicates Warnings, and E indicates Errors. At the end of the scan, it will output a full report listing the file containing each issue, the line number where it occurs, whether it is a Warning or an Error, and the specific issue discovered.

In general, Errors are things that will cause a fatal error in PHP 8.1 and will need to be fixed before you can migrate. Warnings can be things that have been deprecated in 8.0/8.1 but not yet removed or issues that PCS ran into while trying to parse the file.

[asciicast: terminal recording of the PCS scan and report output]

Given that the report might be long, and is output all at once into your terminal, there are numerous options for changing the information that is included in the report, as well as multiple reporting formats.
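For example, --report=summary condenses the output to per-file totals, and --report-file writes the full report to a file you can review later (both are standard PCS options):

$ phpcs -p . --standard=PHPCompatibility --runtime-set testVersion 8.1 --ignore=*/vendor/* --report=summary

$ phpcs -p . --standard=PHPCompatibility --runtime-set testVersion 8.1 --ignore=*/vendor/* --report-file=phpcs-report.txt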

As you begin to fix your code, you can rerun the report as many times as needed. However, at some point, you’ll need to test the code on an actual PHP 8.1 environment with real data. If you’re using Platform.sh, that’s as easy as creating a branch, changing a single line in your configuration file, and pushing that branch to us. You can check out this video to see how easy it is!
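If you find yourself rerunning the scan often, you can avoid retyping all of those flags by capturing them in a project-level ruleset file. A minimal sketch, using standard PCS ruleset directives (phpcs.xml is the conventional file name, picked up automatically when you run phpcs without --standard):

<?xml version="1.0"?>
<ruleset name="PHP81Compatibility">
    <description>Scan for PHP 8.1 incompatibilities.</description>
    <!-- use the PHPCompatibility standard -->
    <rule ref="PHPCompatibility"/>
    <!-- tell PHPCompatibility which PHP version to test against -->
    <config name="testVersion" value="8.1"/>
    <!-- skip third-party code -->
    <exclude-pattern>*/vendor/*</exclude-pattern>
    <!-- always show progress -->
    <arg value="p"/>
</ruleset>

With that in place, the scan becomes simply phpcs .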

There’s too much to fix!

Now that you have a solid idea of what needs to be updated before you can migrate, you might be facing an incredible amount of work ahead of you. Luckily, you have some options to help you out. PCS ships with a code fixer called PHP Code Beautifier and Fixer (phpcbf). Running phpcbf is almost identical to running phpcs and most of the options are identical. The other option is Rector. Usage of these tools is beyond the scope of this article, but as with any automation, you’ll want to test and verify before promoting changes to production.
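As a starting point, a phpcbf run mirroring the earlier scan might look like the following (same standard and flags; note that phpcbf rewrites files in place, so run it on a branch, and a detection-focused standard like PHPCompatibility exposes relatively few automatic fixes):

$ phpcbf -p . --standard=PHPCompatibility --runtime-set testVersion 8.1 --ignore=*/vendor/*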

Alternative options

If for any reason you don’t feel comfortable using a non-stable version of PCS, you do have other options for checking your code.

Phan

Phan is a static code analyzer for PHP. It offers multiple levels of analysis and allows for incrementally strengthening that analysis.

“Static analysis needs to be introduced slowly if you want to avoid your team losing their minds.”

Phan doesn’t target just compatibility with newer versions, but it can highlight areas of code that will error in later versions. However, there are some caveats when using Phan for checking compatibility (a sample invocation follows the list):

  • Slower than PCS+PHPCompatibility.
  • Phan requires the ast PHP extension, which is not available by default on Platform.sh (or in DDEV). You’ll need to install it in your local development environment and add it to your php.ini file. Alternatively, you can use the --allow-polyfill-parser option, but it is considerably slower.
  • Phan’s default reporting output isn’t as easy to read as other options.
  • I came across an issue where, if your codebase sets a different vendor directory via Composer’s config:vendor-dir option (https://getcomposer.org/doc/06-config.md#vendor-dir), Phan will error out stating it can’t find certain files in the vendor directory.
  • As mentioned, Phan analyzes much more than just PHP8.1 compatibility. While certainly a strength in other situations, if your goal is to migrate from 7.4 to 8.1 as quickly as possible, you will have to parse through errors that are unrelated to version compatibility.
  • Requires you run it on the PHP version you want to target
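As a rough sketch of a compatibility-focused run, assuming Phan was installed via Composer (flag names are from Phan’s CLI; --target-php-version sets the analysis target, and --allow-polyfill-parser avoids the ast extension at the cost of speed):

$ composer require --dev phan/phan
$ vendor/bin/phan --init --init-level=3
$ vendor/bin/phan --target-php-version 8.1 --allow-polyfill-parser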

PHPStan

Similar to Phan, PHPStan is a static code analyzer for PHP that promises to “find bugs without writing tests.” A similar set of caveats applies (a sample invocation follows the list):

  • Slower than either PCS or Phan
  • Analyzes much more than just PHP 8.1 compatibility, so depending on your current codebase, you may have to parse through a bunch of errors that are unrelated to version compatibility
  • Requires you run it on the PHP version you want to target
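A minimal sketch of such a run, assuming a Composer install (--level=0 is PHPStan’s loosest analysis level, which keeps unrelated errors to a minimum; the path argument is whatever directory holds your code):

$ composer require --dev phpstan/phpstan
$ vendor/bin/phpstan analyse src --level=0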

PHP Parallel Lint

A very fast PHP linter that can check your codebase for syntax issues and can also check for deprecations. While it is exceptionally fast, it is only a linter, and therefore can only surface deprecations that are thrown at compile time, not at runtime. In my example code, it found only 2 deprecations vs the 960 deprecations PCS uncovered.
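A minimal sketch, assuming a Composer install (--show-deprecated asks it to surface deprecations, and --exclude keeps vendor out of the lint):

$ composer require --dev php-parallel-lint/php-parallel-lint
$ vendor/bin/parallel-lint --show-deprecated --exclude vendor .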

Summary

Code migrations, while never fun, are crucial to minimizing organizational risk. Platform.sh gives you the flexibility to test your code using the same data and configurations as your production site, but in a siloed environment. Combine this with the tools above, and you have everything you need for a strong, efficient code migration.

Original article source at: https://opensource.com

#php #migrate 

Migrate Your Code From PHP 7.4 to 8.1

Add Foreign Key in Migration – CodeIgniter 4

In a database, a foreign key is a field that references another table.

They keep track of related records and which table they exist in. They also let you know what record they relate to, which means updating them is simple and quick.

In this tutorial, I show how you can add a foreign key while creating a table using a migration in CodeIgniter 4.

Contents

  1. Database Configuration
  2. Create Tables and add Foreign key
  3. Create Models
  4. Conclusion

1. Database configuration

  • Open the .env file, which is available at the project root.

NOTE – If there is no dot (.) at the start of the file name, rename the file to .env.

  • Remove the # from the start of database.default.hostname, database.default.database, database.default.username, database.default.password, and database.default.DBDriver.
  • Update the configuration and save it.
database.default.hostname = 127.0.0.1
database.default.database = testdb
database.default.username = root
database.default.password = 
database.default.DBDriver = MySQLi

2. Create Tables and add Foreign key

I am creating 2 tables –

  • departments
  • employees

I’ll add a foreign key on the depart_id field of the employees table.


departments

  • Create departments table –
php spark migrate:create create_departments_table
  • Navigate to app/Database/Migrations/ folder.
  • Find PHP file that ends with CreateDepartmentsTable.php and open it.
  • In the up() method, define the table structure.
  • In the down() method, drop the departments table; it is called when undoing the migration.
<?php

namespace App\Database\Migrations;

use CodeIgniter\Database\Migration;

class CreateDepartmentsTable extends Migration
{
     public function up() {
          $this->forge->addField([
             'id' => [
                  'type' => 'INT',
                  'constraint' => 5,
                  'unsigned' => true,
                  'auto_increment' => true,
             ],
             'name' => [
                  'type' => 'VARCHAR',
                  'constraint' => '100',
             ]
          ]);
          $this->forge->addKey('id', true);
          $this->forge->createTable('departments');
     }

     public function down() {
         $this->forge->dropTable('departments');
     }
}

employees

  • Create employees table –
php spark migrate:create create_employees_table
  • Navigate to app/Database/Migrations/ folder and open PHP file that ends with CreateEmployeesTable.php.
  • In this table, create the id, depart_id, and name fields.

Add foreign key –

  • Here, the depart_id field is used to define the foreign key.
  • Call the $this->forge->addForeignKey() method to set the foreign key.
  • In the method, pass 5 parameters –
    1. depart_id – Foreign key field name.
    2. departments – Parent table name.
    3. id – Primary key or unique field name of the parent table that needs to link.
    4. CASCADE – Delete matching records when the delete query executes in the parent table.
    5. CASCADE – Update matching records when the update query executes in the parent table.
$this->forge->addForeignKey('depart_id', 'departments', 'id', 'CASCADE', 'CASCADE');
  • Drop the employees table in the down() method, which is called when undoing the migration.
<?php

namespace App\Database\Migrations;

use CodeIgniter\Database\Migration;

class CreateEmployeesTable extends Migration
{
    public function up() {
       $this->forge->addField([
           'id' => [
               'type' => 'INT',
               'constraint' => 5,
               'unsigned' => true,
               'auto_increment' => true,
           ],
           'depart_id' => [
               'type' => 'INT',
               'constraint' => 5,
               'unsigned' => true,
           ],
           'name' => [
               'type' => 'VARCHAR',
               'constraint' => '100',
           ]
       ]);

       $this->forge->addKey('id', true);
       $this->forge->addForeignKey('depart_id', 'departments', 'id', 'CASCADE', 'CASCADE');
       $this->forge->createTable('employees');

    }

    public function down() {
       $this->forge->dropTable('employees');
    }
}

Run the migration –

php spark migrate

3. Create Models

Create 2 models –

  • Departments
  • Employees

Departments

  • Create Departments Model –
php spark make:model Departments
  • Open app/Models/Departments.php file.
  • In the $allowedFields array, specify the field names – ['name'] – that can be set during insert and update.

Completed Code

<?php

namespace App\Models;

use CodeIgniter\Model;

class Departments extends Model
{
    protected $DBGroup = 'default';
    protected $table = 'departments';
    protected $primaryKey = 'id';
    protected $useAutoIncrement = true;
    protected $insertID = 0;
    protected $returnType = 'array';
    protected $useSoftDeletes = false;
    protected $protectFields = true;
    protected $allowedFields = ['name'];

    // Dates
    protected $useTimestamps = false;
    protected $dateFormat = 'datetime';
    protected $createdField = 'created_at';
    protected $updatedField = 'updated_at';
    protected $deletedField = 'deleted_at';

    // Validation
    protected $validationRules = [];
    protected $validationMessages = [];
    protected $skipValidation = false;
    protected $cleanValidationRules = true;

    // Callbacks
    protected $allowCallbacks = true;
    protected $beforeInsert = [];
    protected $afterInsert = [];
    protected $beforeUpdate = [];
    protected $afterUpdate = [];
    protected $beforeFind = [];
    protected $afterFind = [];
    protected $beforeDelete = [];
    protected $afterDelete = [];
}

Employees

  • Create Employees Model –
php spark make:model Employees
  • Open app/Models/Employees.php file.
  • In the $allowedFields array, specify the field names – ['depart_id','name'] – that can be set during insert and update.

Completed Code

<?php

namespace App\Models;

use CodeIgniter\Model;

class Employees extends Model
{
     protected $DBGroup = 'default';
     protected $table = 'employees';
     protected $primaryKey = 'id';
     protected $useAutoIncrement = true;
     protected $insertID = 0;
     protected $returnType = 'array';
     protected $useSoftDeletes = false;
     protected $protectFields = true;
     protected $allowedFields = ['depart_id','name'];

     // Dates
     protected $useTimestamps = false;
     protected $dateFormat = 'datetime';
     protected $createdField = 'created_at';
     protected $updatedField = 'updated_at';
     protected $deletedField = 'deleted_at';

     // Validation
     protected $validationRules = [];
     protected $validationMessages = [];
     protected $skipValidation = false;
     protected $cleanValidationRules = true;

     // Callbacks
     protected $allowCallbacks = true;
     protected $beforeInsert = [];
     protected $afterInsert = [];
     protected $beforeUpdate = [];
     protected $afterUpdate = [];
     protected $beforeFind = [];
     protected $afterFind = [];
     protected $beforeDelete = [];
     protected $afterDelete = [];
}
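As a quick, hypothetical sketch of how these models might be used elsewhere in the app (insert() is the standard CodeIgniter 4 Model method; the values are illustrative only):

$departments = new \App\Models\Departments();
$departmentId = $departments->insert(['name' => 'Engineering']);

$employees = new \App\Models\Employees();
$employees->insert([
    'depart_id' => $departmentId, // must reference an existing departments.id
    'name'      => 'John Doe',
]);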

4. Conclusion

If you don’t want any changes made to the child table when a delete/update is performed on the parent table, remove the CASCADE arguments when defining the foreign key.
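For example, since the last two arguments are optional, a foreign key with the database’s default (no action) behavior can be created like this:

$this->forge->addForeignKey('depart_id', 'departments', 'id');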

If you found this tutorial helpful then don't forget to share.

Original article source at: https://makitweb.com/

#codeigniter #migrate #key 


How to Migrate To A Custom User Model Mid-project in Django

This article looks at how to migrate to a custom user model mid-project in Django.

Custom User Model

Django's default user model comes with a relatively small number of fields. Because these fields are not sufficient for all use cases, a lot of Django projects switch to custom user models.

Switching to a custom user model is easy before you migrate your database, but gets significantly more difficult after that since it affects foreign keys, many-to-many relationships, and migrations, to name a few.

To avoid going through this cumbersome migration process, Django's official documentation highly recommends you set up a custom user model at the start of the project even if the default one is sufficient.

To this day, there's still no official way of migrating to a custom user model mid-project. The Django community is still discussing the best way to migrate in the following ticket.

In this article, we'll look at a relatively easy approach to migrating to a custom user model mid-project. The migration process we're going to use isn't as destructive as some of the other ones found on the internet and won't require any raw SQL executions or modifying migrations by hand.

For more on creating a custom user model at the start of a project, check out the Creating a Custom User Model in Django article.

Dummy Project

Migrating to a custom user model mid-project is a potentially destructive action. Because of that, I've prepared a dummy project you can use to test the migration process before moving on to your actual codebase.

If you want to work with your own codebase feel free to skip this section.

The dummy project we're going to be working with is called django-custom-user. It's a simple todo app that leverages the user model.

Clone it down:

$ git clone --single-branch --branch base git@github.com:duplxey/django-custom-user.git
$ cd django-custom-user

Create a new virtual environment and activate it:

$ python3 -m venv venv && source venv/bin/activate

Install the requirements:

(venv)$ pip install -r requirements.txt

Spin up a Postgres Docker container:

$ docker run --name django-todo-postgres -p 5432:5432 \
    -e POSTGRES_USER=django-todo -e POSTGRES_PASSWORD=complexpassword123 \
    -e POSTGRES_DB=django-todo -d postgres

Alternatively, you can install and run Postgres outside of Docker if that's your preference. Just make sure to go to core/settings.py and change the DATABASES credentials accordingly.

Migrate the database:

(venv)$ python manage.py migrate

Load the fixtures:

(venv)$ python manage.py loaddata fixtures/auth.json --app auth
(venv)$ python manage.py loaddata fixtures/todo.json --app todo

These two fixtures added a few users, groups, and tasks to the database and created a superuser with the following credentials:

username:  admin
password:  password

Next, run the server:

(venv)$ python manage.py runserver

Lastly, navigate to the admin panel at http://localhost:8000/admin, log in as the superuser, and make sure that the data has been loaded successfully.

Migration Process

The migration process we're going to use assumes that:

  1. Your project doesn't have a custom user model yet.
  2. You've already created your database and migrated it.
  3. There are no pending migrations and all the existing migrations have been applied.
  4. You don't want to lose any data.

If you're still in the development phase and the data in your database isn't important, you don't have to follow these steps. To migrate to a custom user model, you can simply wipe the database, delete all the migration files, and then follow the steps here.

Before following along, please fully back up your database (and codebase). You should also try the steps on a staging branch/environment before moving to production.

Migration Steps

  1. Point AUTH_USER_MODEL to the default Django user in settings.py.
  2. Replace all User references with AUTH_USER_MODEL or get_user_model() accordingly.
  3. Start a new Django app and register it in settings.py.
  4. Create an empty migration within the newly created app.
  5. Migrate the database, so the empty migration gets applied.
  6. Delete the empty migration file.
  7. Create a custom user model in the newly created app.
  8. Point AUTH_USER_MODEL to the custom user.
  9. Run makemigrations.

Let's begin!

Step 1

To migrate to a custom user model we first need to get rid of all the direct User references. To do that, start by adding a new property named AUTH_USER_MODEL in settings.py like so:

# core/settings.py

AUTH_USER_MODEL = 'auth.User'

This property tells Django what user model to use. Since we don't have a custom user model yet we'll point it to the default Django user model.

Step 2

Next, go through your entire codebase and make sure to replace all User references with AUTH_USER_MODEL or get_user_model() accordingly:

# todo/models.py

class UserTask(GenericTask):
    user = models.ForeignKey(
        to=AUTH_USER_MODEL,
        on_delete=models.CASCADE
    )

    def __str__(self):
        return f'UserTask {self.id}'


class GroupTask(GenericTask):
    users = models.ManyToManyField(
        to=AUTH_USER_MODEL
    )

    def __str__(self):
        return f'GroupTask {self.id}'

Don't forget to import AUTH_USER_MODEL at the top of the file:

from core.settings import AUTH_USER_MODEL

Also make sure that all the third-party apps/packages you use do the same. If any of them reference the User model directly, things might break. You don't have to worry about this much since most of the popular packages that leverage the User model don't reference it directly.
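For code that needs the user class at runtime (views, forms, and so on), get_user_model() is the usual replacement for importing User directly. A minimal, illustrative sketch:

# e.g. in a views.py somewhere in your project

from django.contrib.auth import get_user_model

User = get_user_model()  # resolves to whatever AUTH_USER_MODEL points at


def find_user(username):
    # no direct import of django.contrib.auth.models.User needed
    return User.objects.get(username=username)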

Step 3

Moving on, we need to start a new Django app, which will host the custom user model.

I'll call it users but you can pick a different name:

(venv)$ python manage.py startapp users

If you want, you can reuse an already existing app, but you need to make sure that there are no migrations within that app yet; otherwise, the migration process won't work due to Django's limitations.

Register the app in settings.py:

# core/settings.py

INSTALLED_APPS = [
    'django.contrib.admin',
    'django.contrib.auth',
    'django.contrib.contenttypes',
    'django.contrib.sessions',
    'django.contrib.messages',
    'django.contrib.staticfiles',
    'todo.apps.TodoConfig',
    'users.apps.UsersConfig',  # new
]

Step 4

Next, we need to trick Django into thinking that the users app is in charge of the auth_user table. This can usually be done with the migrate command and the --fake flag, but not in this case because we'll run into InconsistentMigrationHistory since most migrations depend on auth migrations.

Anyway, to bypass this, we can use a hacky workaround. First, we'll create an empty migration, apply it so it gets saved to the django_migrations table, and then swap it with the actual auth_user migration.

Create an empty migration within the users app:

(venv)$ python manage.py makemigrations --empty users

Migrations for 'users':
  users\migrations\0001_initial.py

This should create an empty migration named users/migrations/0001_initial.py.

Step 5

Migrate the database so the empty migration gets added to the django_migrations table:

(venv)$ python manage.py migrate

Operations to perform:
  Apply all migrations: admin, auth, contenttypes, sessions, todo, users
Running migrations:
  Applying users.0001_initial... OK

Step 6

Now, delete the empty migration file:

(venv)$ rm users/migrations/0001_initial.py

Step 7

Go to users/models.py and define the custom User model like so:

# users/models.py

from django.contrib.auth.models import AbstractUser


class User(AbstractUser):
    class Meta:
        db_table = 'auth_user'

Do not add any custom fields yet. This model has to be an exact replica of Django's default user model, since we'll use it to create the initial auth_user table migration.

Also, make sure to name it User, otherwise you might run into problems because of content types. You'll be able to change the model's name later.

Step 8

Navigate to your settings.py and point AUTH_USER_MODEL to the just created custom user model:

# core/settings.py

AUTH_USER_MODEL = 'users.User'

If your app is not called users make sure to change it.

Step 9

Run makemigrations to generate the initial auth_user migration:

(venv)$ python manage.py makemigrations

Migrations for 'users':
  users\migrations\0001_initial.py
    - Create model User

And that's it! The generated migration was effectively already applied by Django's auth app when you first ran migrate, so running migrate again won't do anything.

Add New Fields

Once you've got a custom user model set up, it's easy to add new fields.

To add a phone and address field, for example, add the following to the custom user model:

# users/models.py

class User(AbstractUser):
    phone = models.CharField(max_length=32, blank=True, null=True)    # new
    address = models.CharField(max_length=64, blank=True, null=True)  # new

    class Meta:
        db_table = 'auth_user'

Don't forget to import models at the top of the file:

from django.db import models

Next, make migrations and migrate:

(venv)$ python manage.py makemigrations
(venv)$ python manage.py migrate

To make sure the fields have been reflected in the database, bash into the Docker container:

$ docker exec -it django-todo-postgres bash

Connect to the database via psql:

root@967e9158a787:/# psql -U django-todo

psql (14.5 (Debian 14.5-1.pgdg110+1))
Type "help" for help.

And inspect the auth_user table:

django-todo=# \d+ auth_user

                                                                Table "public.auth_user"
    Column    |           Type           | Collation | Nullable |             Default              | Storage  | Compression | Stats target | Description
--------------+--------------------------+-----------+----------+----------------------------------+----------+-------------+--------------+-------------
 id           | integer                  |           | not null | generated by default as identity | plain    |             |              |
 password     | character varying(128)   |           | not null |                                  | extended |             |              |
 last_login   | timestamp with time zone |           |          |                                  | plain    |             |              |
 is_superuser | boolean                  |           | not null |                                  | plain    |             |              |
 username     | character varying(150)   |           | not null |                                  | extended |             |              |
 first_name   | character varying(150)   |           | not null |                                  | extended |             |              |
 last_name    | character varying(150)   |           | not null |                                  | extended |             |              |
 email        | character varying(254)   |           | not null |                                  | extended |             |              |
 is_staff     | boolean                  |           | not null |                                  | plain    |             |              |
 is_active    | boolean                  |           | not null |                                  | plain    |             |              |
 date_joined  | timestamp with time zone |           | not null |                                  | plain    |             |              |
 phone        | character varying(32)    |           |          |                                  | extended |             |              |
 address      | character varying(64)    |           |          |                                  | extended |             |              |

You can see that the new fields named phone and address have been added.

Django Admin

To display the custom user model in the Django admin panel you first need to create a new class that inherits from UserAdmin and then register it. Next, include phone and address in the fieldsets.

The final users/admin.py should look like this:

# users/admin.py

from django.contrib import admin
from django.contrib.auth.admin import UserAdmin

from users.models import User


class CustomUserAdmin(UserAdmin):
    fieldsets = UserAdmin.fieldsets + (
        ('Additional info', {'fields': ('phone', 'address')}),
    )


admin.site.register(User, CustomUserAdmin)

Run the server again, log in, and navigate to a random user. Scroll down to the bottom and you should see a new section with the new fields.

If you wish to customize the Django admin even further, take a look at The Django admin site from the official docs.

Rename User Table/Model

At this point, you can rename the user model and the table as you normally would.

To rename the user model, simply change the class name, and to rename the table change the db_table property:

# users/models.py

class User(AbstractUser):  # <-- you can change me
    phone = models.CharField(max_length=32, blank=True, null=True)
    address = models.CharField(max_length=64, blank=True, null=True)

    class Meta:
        db_table = 'auth_user'  # <-- you can change me

If you remove the db_table property the table name will fall back to <app_name>_<model_name>.

After you're done with your changes, run:

(venv)$ python manage.py makemigrations
(venv)$ python manage.py migrate

I generally wouldn't recommend renaming anything, because your database structure will become inconsistent. Some of the tables will have the users_ prefix, while some of them will have the auth_ prefix. But on the other hand, you could argue that the User model is now a part of the users app, so it shouldn't have the auth_ prefix.

In case you decide to rename the table, the final database structure will look similar to this:

django-todo=# \dt

                     List of relations
 Schema |            Name             | Type  |    Owner
--------+-----------------------------+-------+-------------
 public | auth_group                  | table | django-todo
 public | auth_group_permissions      | table | django-todo
 public | auth_permission             | table | django-todo
 public | django_admin_log            | table | django-todo
 public | django_content_type         | table | django-todo
 public | django_migrations           | table | django-todo
 public | django_session              | table | django-todo
 public | todo_task                   | table | django-todo
 public | todo_task_categories        | table | django-todo
 public | todo_taskcategory           | table | django-todo
 public | users_user                  | table | django-todo
 public | users_user_groups           | table | django-todo
 public | users_user_user_permissions | table | django-todo

Conclusion

Even though this problem of migrating to a custom user model mid-project has been around for quite a while there's still no official solution.

Unfortunately, a lot of Django developers have to go through this migration process, because the Django documentation doesn't emphasize enough that you should create a custom user model at the start of the project. Maybe they could even include it in the tutorial?

Hopefully the migration process I've presented in the article worked for you without any issues. In case something didn't work for you or you think something could be improved, I'd love to hear your feedback.

You can get the final source code from the django-custom-user repo.

Original article source at: https://testdriven.io/

#django #migrate 


Phinx: PHP Database Migrations for Everyone

Phinx: Simple PHP Database Migrations

Intro

Phinx makes it ridiculously easy to manage the database migrations for your PHP app. In less than 5 minutes, you can install Phinx and create your first database migration. Phinx is just about migrations without all the bloat of a database ORM system or framework.


Features

  • Write database migrations using database agnostic PHP code.
  • Migrate up and down.
  • Migrate on deployment.
  • Seed data after database creation.
  • Get going in less than 5 minutes.
  • Stop worrying about the state of your database.
  • Take advantage of SCM features such as branching.
  • Integrate with any app.

Supported Adapters

Phinx natively supports the following database adapters:

  • MySQL
  • PostgreSQL
  • SQLite
  • Microsoft SQL Server

Install & Run

See version and branch overview for branch and PHP compatibility.

Composer

The fastest way to install Phinx is to add it to your project using Composer (https://getcomposer.org/).

Install Composer:

curl -sS https://getcomposer.org/installer | php

Require Phinx as a dependency using Composer:

php composer.phar require robmorgan/phinx

Install Phinx:

php composer.phar install

Execute Phinx:

php vendor/bin/phinx
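From there, a typical first session might look like this (command names from Phinx’s CLI; the development environment name comes from the phinx configuration that init generates):

php vendor/bin/phinx init
php vendor/bin/phinx create MyFirstMigration
php vendor/bin/phinx migrate -e development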

As a Phar

You can also use the Box application to build Phinx as a Phar archive (https://box-project.github.io/box2/).

Clone Phinx from GitHub

git clone https://github.com/cakephp/phinx.git
cd phinx

Install Composer

curl -s https://getcomposer.org/installer | php

Install the Phinx dependencies

php composer.phar install

Install Box:

curl -LSs https://box-project.github.io/box2/installer.php | php

Create a Phar archive

php box.phar build

Documentation

Check out https://book.cakephp.org/phinx for the comprehensive documentation.


Contributing

Please read the CONTRIBUTING document.

News & Updates

Follow @CakePHP on Twitter to stay up to date.


Version History

Please read the release notes.


Download Details:

Author: Cakephp
Source Code: https://github.com/cakephp/phinx 
License: MIT license

#php #cakephp #migrate #database


Serverless Plugin for Migrate

Serverless plugin for migrate

This is a plugin for the Serverless framework that allows you to manage and run database-agnostic migrations. To do so, it works on top of migrate.

Features

With this plugin you can

  • Make the commands of migrate available via the serverless CLI.
  • Be aware of the environment variables configured in your serverless.yml.
  • Add the env variable SERVERLESS_ROOT_PATH which points to the root directory of your project.
  • Configure aspects of your migration using your serverless.yml: no need to specify them as options with the CLI.
  • Set values to env variables just for the migration context.
  • Specify a custom character indicator of the last run migration.

Basically, these migrations can do anything that involves applying I/O changes and undoing them. Watch the CHANGELOG to see what has been added to date.

Quick start

To get into details, check out the example project of this repository. It contains a README with an explanation about all the valid commands and configuration variables you can use. For starters, this is what you must do to start working right away with migrations:

  1. Install serverless-migrate-plugin in your project:
npm i serverless-migrate-plugin
  2. Add it to the plugins section of your serverless.yml:
plugins: 
  - serverless-migrate-plugin
  3. Create your first migration:
sls migrate create -n <your-migration-name>

Now you are ready to implement your migrations. Once you have finished, you can run them using sls migrate up and sls migrate down. If you want to know more about any command, just run:

 sls migrate <command> --help

It is also recommended that you understand how the migrate library works, like how to create migrations.
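For reference, a migration file for the underlying migrate library is just a module exporting up and down functions. A minimal sketch, following migrate’s callback convention:

'use strict'

// apply the change, then signal completion (or pass an error)
module.exports.up = function (next) {
  next()
}

// undo the change made by up()
module.exports.down = function (next) {
  next()
}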

Built With

  • Serverless framework: A powerful, unified experience to develop, deploy, test, secure, and monitor your Serverless applications.
  • Migrate: Abstract migration framework for node.
  • Node.js: as the JavaScript runtime (Node 8+).

Author: EliuX
Source Code: https://github.com/EliuX/serverless-migrate-plugin 
License: MIT license

#serverless #cli #migrate #plugin 


Awesome PHP: Libraries to Help Manage Database Schemas and Migrations

Migrations

Libraries to help manage database schemas and migrations.

  • Doctrine Migrations - A migration library for Doctrine.
  • Migrations - A migration management library.
  • Phinx - Another database migration library.
  • PHPMig - Another migration management library.
  • Ruckusing - Database migrations for PHP ala ActiveRecord Migrations with support for MySQL, Postgres, SQLite.

Author: ziadoz
Source Code: https://github.com/ziadoz/awesome-php
License: WTFPL License

#php #migrate 


Database Migrations for PHP Ala ActiveRecord Migrations

Introduction

Ruckusing is a framework written in PHP5 for generating and managing a set of "database migrations". Database migrations are declarative files which represent the state of a DB (its tables, columns, indexes, etc.) at a particular point in time. By using database migrations, multiple developers can work on the same application and be guaranteed that the application is in a consistent state across all remote developer machines.

The idea of the framework was borrowed from the migration system built into Ruby on Rails. Any one who is familiar with Migrations in RoR will be immediately at home.

Getting Started & Documentation

See the Wiki for the complete documentation on the migration methods supported and how to get started.

Databases Supported

  • Postgres
  • MySQL
  • Sqlite

Features

Portability: the migration files, which describe the tables, columns, indexes, etc to be created are themselves written in pure PHP5 which is then translated to the appropriate SQL at run-time. This allows one to transparently support any RDBMS with a single set of migration files (assuming there is an adapter for it, see below).

"rake" like support for basic tasks. The framework has a concept of "tasks" (in fact the primary focus of the framework, migrations, is just a plain task) which are just basic PHP5 classes which implement an interface. Tasks can be freely written and as long as they adhere to a specific naming convention and implement a specific interface, the framework will automatically register them and allow them to be executed.

The ability to go UP or DOWN to a specific migration state.

Code generator for generating skeleton migration files.

Support for module based migration directories where migrations files could be generated/run from specified module directories.

Out-of-the-box support for basic tasks like initializing the DB schema info table (db:setup), asking for the current version (db:version) and dumping the current schema (db:schema).

Limitations

  • PHP 5.2+ is a hard requirement. The framework makes extensive use of the object-oriented features of PHP5. There are no plans to make the framework backwards compatible.

Configuration

  • Copy /path/to/ruckusing-migrations/config/database.inc.php to /path/to/mycodebase/ruckusing.conf.php and update the development key with your DB credentials:

type is one of pgsql, mysql, or sqlite, depending on your database; also update the migrations_dir, db_dir, log_dir, and ruckusing_base paths.

If you want to use module migration directories, edit /path/to/mycodebase/ruckusing.conf.php and update migrations_dir with an array like array('default' => '/default/path', 'module_name' => '/module/migration/path').

Copy /path/to/ruckusing-migrations/ruckus.php to /path/to/mycodebase/ruckus.php.
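To give a sense of the general shape, here is a sketch only, based on the keys mentioned above (the copied config/database.inc.php is the authoritative reference for the exact keys and defaults):

<?php
// ruckusing.conf.php (sketch)
return array(
    'db' => array(
        'development' => array(
            'type'     => 'mysql',     // one of: pgsql, mysql, sqlite
            'host'     => 'localhost',
            'port'     => 3306,
            'database' => 'my_app',
            'user'     => 'root',
            'password' => '',
        ),
    ),
    'migrations_dir' => array('default' => RUCKUSING_WORKING_BASE . '/migrations'),
    'db_dir'         => RUCKUSING_WORKING_BASE . '/db',
    'log_dir'        => RUCKUSING_WORKING_BASE . '/logs',
);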

Custom Tasks

All tasks in lib/Task are enabled by default. If you would like to implement custom tasks, you can specify the directory of your tasks in your overridden ruckusing.conf.php via the tasks_dir key:

# ruckusing.conf.php

return array(
 /* ... snip ... */,
 'tasks_dir' => RUCKUSING_WORKING_BASE . '/custom_tasks'
);

Generating Skeleton Migration files

From the top-level of your code base, run:

$ php ruckus.php db:generate create_users_table

Created OK
Created migration: 20121112163653_CreateUsersTable.php

Module migration directory example:

$ php ruckus.php db:generate create_items_table module=module_name

Created OK
Created migration: 20121112163653_CreateItemsTable.php

The generated file is in the migrations directory. Open up that file and you'll see it looks like:

class CreateUsersTable extends Ruckusing_Migration_Base {

	public function up() {

	}//up()

	public function down() {

	}//down()
}

All of the migration methods described below are meant to be called from within the up() and down() methods.
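For instance, a filled-in version of the skeleton above, using only methods documented later in this article, might look like:

class CreateUsersTable extends Ruckusing_Migration_Base {

	public function up() {
		// create the table, add columns, then finish() to execute
		$users = $this->create_table('users');
		$users->column('email', 'string');
		$users->column('first_name', 'string');
		$users->column('last_name', 'string');
		$users->finish();
		$this->add_index('users', 'email');
	}//up()

	public function down() {
		// undo everything up() did, in reverse order
		$this->remove_index('users', 'email');
		$this->drop_table('users');
	}//down()
}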

Environments

You can switch environments via the ENV command line argument. The default environment is development.

To specify an additional environment add it to ruckusing.conf.php under the db key.

Running with a different environment:

$ ENV=test php ruckus.php db:migrate

Running Migrations

Run all pending migrations:

$ php ruckus.php db:migrate

Rollback the most recent migration:

$ php ruckus.php db:migrate VERSION=-1

Rollback to a specific migration (specify the timestamp in the filename of the migration to rollback to):

$ php ruckus.php db:migrate VERSION=20121114001742

Overview of the migration methods available

The available methods are (brief list below, with detailed usage further down):

Database-level operations

  • create_database
  • drop_database

Table-level operations

  • create_table
  • drop_table
  • rename_table

Column-level operations

  • add_column
  • remove_column
  • rename_column
  • change_column
  • add_timestamps

Index-level operations

  • add_index
  • remove_index

Query execution

  • execute
  • select_one
  • select_all

Database-level Operations

There are two database-level operations, create_database and drop_database. Migrations that manipulate databases at this high a level are rarely used.

Creating a new Database

This command is slightly useless since normally you would be running your migrations against an existing database (created and set up with whatever your traditional RDBMS creation methods are). However, if you wanted to create another database from a migration, this method is available:

Method Call: create_database

Parameters: name : Name of the new database

Example:

    $this->create_database("my_project");

Removing a database

To completely remove a database and all of its tables (and data!).

Method Call: drop_database

Parameters: name : Name of the existing database

Example:

    $this->drop_database("my_project");

This method is probably the most complex of all methods, but also one of the most widely used.

Method Call: create_table

Parameters

name : Name of the new table

options : (Optional) An associative array of options for creating the new table.

Supported option key/value pairs are:

id : Boolean - whether or not the framework should automatically generate a primary key. For MySQL the column will be called id and be of type integer with auto-incrementing.

options : A string representing finalization parameters that will be passed verbatim to the tail of the create table command. Often this is used to specify the storage engine for MySQL, e.g. 'options' => 'Engine=InnoDB'

Assumptions: Ultimately this method delegates to the appropriate RDBMS adapter, and the MySQL adapter makes some important assumptions about the structure of the table.

Table-level operations

The database migration framework offers a rich facility for creating, removing and renaming tables.

Creating tables

A call to $this->create_table(...) actually returns a TableDefinition object. This method of the framework is one of the very few which actually returns a result that you must interact with (as an end user).

The steps for creating a new table are:

  • Create the table with a name and any optional options and store the return value for later use:
    $users = $this->create_table("users");
  • Add columns to the table definition:
    $users->column("first_name", "string");
    $users->column("last_name", "string");
  • Call finish() to actually create the table with the definition and its columns:
    $users->finish();

By default, the table type will be whatever your database defaults to. To specify a different table type (e.g. InnoDB), pass a key of options into the $options array, e.g.

Example A: Create a new InnoDB table called users.

    $this->create_table('users', array('options' => 'Engine=InnoDB'));
  • This command also assumes that you want an id column. This column does not need to be specified; it will be auto-generated unless explicitly told not to via the id key in the $options array.

Example B: Create a new table called users but do not automatically make a primary key.

    $this->create_table('users', array('id' => false));

The primary key column will be created with attributes of int(11) unsigned auto_increment.

Example C: To specify your own primary key called 'guid':

    $t = $this->create_table('users', array('id' => false, 'options' => 'Engine=InnoDB'));
    $t->column('guid', 'string', array('primary_key' => true, 'limit' => 64));
    $t->finish();

Removing tables

Tables can be removed by using the drop_table method call. As might be expected, removing a table also removes all of its columns and any indexes.

Method Call: drop_table

Arguments: table_name: The name of the table to remove.

Example:

   $this->drop_table("users");

Renaming tables

Tables can be renamed using the rename_table method.

Method Call: rename_table

Arguments: table_name: The existing name of the table. new_name: The new name of the table.

Example:

   // rename from "users" to "people"
   $this->rename_table("users", "people");

Column-level operations

Adding a new column to a table

For the complete documentation on adding new columns, please see Adding Columns

Removing Columns

Removing a database column is very simple, but keep in mind that any index associated with that column will also be removed.

Method call: remove_column

Arguments: table_name: The name of the table from which the column will be removed.

column_name: The column to be removed.

Example A: Remove the age column from the users table.

    $this->remove_column("users", "age");

Renaming a column

Database columns can be renamed (assuming the underlying RDBMS/adapter supports it).

Method call: rename_column

Arguments: table_name: The name of the table in which the column is to be renamed.

column_name: The existing name of the column.

new_column_name: The new name of the column.

Example A: From the users table, rename first_name to fname

    $this->rename_column("users", "first_name", "fname");

Modifying an existing column

The type, defaults or NULL support for existing columns can be modified. If you want to just rename a column then use the rename_column method. This method takes a generalized type for the column's type and also an array of options which affects the column definition. For the available types and options, see the documentation on adding new columns, AddingColumns.

Method Call: change_column

Arguments: table_name: The name of the table from which the column will be altered.

column_name: The name of the column to change.

type: The desired generalized type of the column.

options: (Optional) An associative array of options for the column definition.

Example A: From the users table, change the length of the first_name column to 128.

    $this->change_column("users", "first_name", "string", array('limit' => 128) );

Add timestamps columns

We often need columns to timestamp the created at and updated at operations. This convenience method is here to easily generate them for you.

Method Call: add_timestamps

Arguments: table_name: The name of the table to which the columns will be added

created_name: The desired name of the created at column, by default created_at

updated_name: The desired name of the updated at column, by default updated_at

Example A: Add timestamp columns to the users table.

    $this->add_timestamps("users");

Example B: Add timestamp columns to the users table with created and updated column names.

    $this->add_timestamps("users", "created", "updated");

Index-level operations

Indexes can be created and removed using the framework methods.

Adding a new index

Method Call: add_index

Arguments: table: The name of the table to add the index to.

column: The column to create the index on. If this is a string, then it is presumed to be the name of the column, and the index will be a single-column index. If it is an array, then it is presumed to be a list of columns name and the index will then be a multi-column index, on the columns specified.

options: (Optional) An associative array of options to control the index generation. Keys / Value pairs:

unique: values: true or false. Whether or not to create a unique index for this column. Defaults to false.

name : values: user defined. The name of the index. If not specified, a default name will be generated based on the table and column name.

Known Issues / Workarounds: MySQL is currently limited to 64 characters for identifier names. When add_index is used without specifying the name of the index, Ruckusing will generate a suitable name based on the table name and the column(s) being indexed. For example, if there is a users table and an index is being generated on the username column, then the generated index name would be idx_users_username. If one is attempting to add a multi-column index, then it's very possible that the generated name would be longer than MySQL's limit of 64 characters. In such situations, Ruckusing will raise an error suggesting you use a custom index name via the name option parameter. See Example C.

Example A: Create an index on the email column in the users table.

    $this->add_index("users", "email");

Example B: Create a unique index on the ssn column in the users table.

    $this->add_index("users", "ssn", array('unique' => true)));

Example C: Create an index on the blog_id column in the posts table, but specify a specific name for the index.

    $this->add_index("posts", "blog_id", array('name' => 'index_on_blog_id'));

Example D: Create a multi-column index on the email and ssn columns in the users table.

    $this->add_index("users", array('email', 'ssn') );

Removing an index

Easy enough. If the index was created using the sibling to this method (add_index) then one would need to just specify the same arguments to that method (but calling remove_index).

Method Call: remove_index

Arguments: table_name: The name of the table to remove the index from.

column_name: The name of the column from which to remove the index from.

options: (Optional) An associative array of options to control the index removal process. Key / Value pairs: name : values: user defined. The name of the index to remove. If not specified, a default name will be generated based on the table and column name. If a name was specified during the index creation process (using the add_index method), then you will need to specify the same name here. Otherwise, the generated default name will likely not match the actual name of the index.

Example A: Remove the (single-column) index from the users table on the email column.

    $this->remove_index("users", "email");

Example B: Remove the (multi-column) index from the users table on the email and ssn columns.

    $this->remove_index("users", array("email", "ssn") );

Example C: Remove the (single-column) named index from the users table on the email column.

    $this->remove_index("users", "email", array('name' => "index_on_email_column") );

Query Execution

Arbitrary query execution is available via a set of methods.

Execute method

The execute() method is intended for queries which do not return any data, e.g. INSERT, UPDATE or DELETE.

Example A: Update all rows given some criteria

    $this->execute("UPDATE foo SET name = 'bar' WHERE .... ");

Queries that return results

For queries that return results, e.g. SELECT queries, use either select_one or select_all depending on what you are returning.

Both of these methods return an associative array with each element of the array being itself another associative array of the column names and their values.

select_one() is intended for queries where you are expecting a single result, and select_all() is intended for all other cases (where you might not necessarily know how many rows you will be getting).

NOTE: Since these methods take raw SQL queries as input, they might not necessarily be portable across all RDBMS.

Example A (select_one): Get the sum of a column

    $result = $this->select_one("SELECT SUM(total_price) AS total_price FROM orders");
    if($result) {
     echo "Your revenue is: " . $result['total_price'];
    }

Example B (select_all): Get all rows and iterate over each one, performing some operation:

    $result = $this->select_all("SELECT email, first_name, last_name FROM users WHERE created_at >= SUBDATE( NOW(), INTERVAL 7 DAY)");

    if($result) {
      echo "New customers: (" . count($result) . ")\n";
      foreach($result as $row) {
        printf("(%s) %s %s\n", $row['email'], $row['first_name'], $row['last_name']);
      }
    }

Testing

The unit tests require phpunit to be installed: http://www.phpunit.de/manual/current/en/installation.html

Running the complete test suite

$ vi config/database.inc.php
$ mysql -uroot -p < tests/test.sql
$ psql -Upostgres -f tests/test.sql
$ phpunit

Will run all test classes in tests/unit.

Running a single test file

$ vi config/database.inc.php
$ mysql -uroot -p < tests/test.sql
$ phpunit tests/unit/MySQLAdapterTest.php

Some of the tests require a mysql_test or pg_test database configuration to be defined. If this is required and it's not satisfied, the test will complain appropriately.

Author: ruckus
Source Code: https://github.com/ruckus/ruckusing-migrations
License: View license

#php #migrate 


Migrations: PHP 5.3 Migration Manager

What are Database Migrations?

Migrations are a convenient way for you to alter your database in a structured and organized manner. You could edit fragments of SQL by hand but you would then be responsible for telling other developers that they need to go and run them. You’d also have to keep track of which changes need to be run against the production machines next time you deploy.

Above from Rails guide.

This Migrations library was inspired by earlier works such as mysql-php-migrations, and implementations found in both Codeigniter and Fulephp frameworks.

What's different?

  1. Written with php 5.3 and uses Symfony2 components and Doctrine DBAL
  2. Allows each project to define templates using Twig.
  3. Uses Doctrine DBAL Schema manager to write platform independent migrations or use normal SQL DDL to control your database.
  4. All commands accept a DSN allowing scripting to apply your migrations to many databases.

Getting Started

Installing

This library can be installed through Composer.

Use require-dev, as you most likely don't want this component in a release cycle.

Create composer.json and add the following:

{
    "require": {
    },
    "require-dev": {
        "icomefromthenet/migration": "dev-master"
    }
}

Running the commands

Create the project folder and then run the init command using the vendor bin migrate.php. Note all commands are prefixed with app:

mkdir migrations
cd migrations
../vendor/bin/migrate.php app:init 

Create the config for your database: answer the questions and a config file will be created.

../vendor/bin/migrate.php app:config 

Run install to add the migrations tracking table to the schema:

../vendor/bin/migrate.php app:install 

Add your first migration by using the add command (optional description slug):

../vendor/bin/migrate.php app:add #prefix# 

Run the up command to install the change:

../vendor/bin/migrate.php app:up 1

Run status to find the head migration:

../vendor/bin/migrate.php app:status

Requirements

  • php 5.3
  • CLI.
  • SPL
  • PDO
  • Composer

Author: icomefromthenet
Source Code: https://github.com/icomefromthenet/Migrations
License: MIT License

#php #migrate 


Django-allauth: A simple Boilerplate to Setup Authentication

Django-Authentication 

A simple Boilerplate to Setup Authentication using Django-allauth, with a custom template for login and registration using django-crispy-forms.

Getting Started

Prerequisites

  • Python 3.8.6 or higher

Project setup

# clone the repo
$ git clone https://github.com/yezz123/Django-Authentication

# move to the project folder
$ cd Django-Authentication

Creating virtual environment

  • Create a virtual environment for this project:
# create a virtual environment for python 3
$ virtualenv venv

# activate the environment
# (on Windows, the activate script lives in venv/Scripts instead of venv/bin)
$ cd venv/bin
$ source ./activate

Configure Environment

Environment variables

SECRET_KEY = #random string
DEBUG = #True or False
ALLOWED_HOSTS = #localhost
DATABASE_NAME = #database name (You can just use the default if you want to use SQLite)
DATABASE_USER = #database user for postgres
DATABASE_PASSWORD = #database password for postgres
DATABASE_HOST = #database host for postgres
DATABASE_PORT = #database port for postgres
ACCOUNT_EMAIL_VERIFICATION = #mandatory or optional
EMAIL_BACKEND = #email backend
EMAIL_HOST = #email host
EMAIL_HOST_PASSWORD = #email host password
EMAIL_USE_TLS = # whether your email host uses TLS
EMAIL_PORT = #email port

Change all the environment variables in .env.sample and don't forget to rename it to .env.

Run the project

After setting up the environment, you can run the project using the Makefile provided in the project folder.

help:
 @echo "Targets:"
 @echo "    make install" #install requirements
 @echo "    make makemigrations" #prepare migrations
 @echo "    make migrations" #migrate database
 @echo "    make createsuperuser" #create superuser
 @echo "    make run_server" #run the server
 @echo "    make lint" #lint the code using black
 @echo "    make test" #run the tests using Pytest
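
For example, a typical first run using the targets above might look like this:

$ make install
$ make makemigrations
$ make migrations
$ make createsuperuser
$ make run_server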

Preconfigured Packages

It includes preconfigured packages to kick-start Django-Authentication; you just need to set the appropriate configuration.

Package                Usage
django-allauth         An integrated set of Django applications addressing authentication, registration, account management as well as 3rd party (social) account authentication.
django-crispy-forms    Provides a crispy filter and {% crispy %} tag that let you control the rendering behavior of your Django forms in a very elegant and DRY way.

Contributing

  • Django-Authentication is a simple project, so you can contribute by adding code that improves it.
  • If you have any questions, please feel free to open an issue or create a pull request.

Download Details:
Author: yezz123
Source Code: https://github.com/yezz123/Django-Authentication
License: MIT License

#django #python 


Daylan Frazer

How to Migrate Outlook to Office 365 Account – Verified Solution

Are you planning to migrate Outlook to an Office 365 account? If yes, then you do not need to worry. Free download and try the DotStella Outlook to Office 365 migration tool, which gives a direct option to export Outlook files to your Office 365 account.

No advanced technical knowledge is required to use this software. It helps users transfer email, contacts, calendars, and other Microsoft Outlook items to a Microsoft O365 account.

Microsoft Outlook is installed on my computer, and it stores mailbox data in OST format. I am currently planning to migrate these OST files to my Office 365 account. Outlook is a desktop-based email client that lets you manage personal information such as email, contacts, and calendars.

In addition, Microsoft Office 365 is a cloud-based personal information management service. Today, the cloud is the need of the hour, and many business users are working hard to host their assets in it.

Similarly, I also want to upload the data in the OST file to an Office 365 account. Unfortunately, I don’t know what I should do.

I have tried many applications and searched the internet, but did not find a reliable one. Please suggest an application, if you have one in mind, to export Outlook files into an Office 365 account successfully.

Smart Solution to Migrate Outlook to Office 365 Account

DotStella Outlook Migrator is a great solution that gives users the ability to directly import PST to Office 365 webmail. To complete the migration, the user only needs to enter the login information for their Office 365 account. This software also includes an advanced Admin option that allows users to migrate their OST file data to their Office 365 admin account.

Trial Limitation: The free trial edition of the OST to Office 365 conversion utility allows users to export the first 10 emails from each folder. To export more than 10 emails, the user must first activate the software by purchasing a license key.

Working Steps of the Software

  • First, install the Outlook to Office 365 migration utility on your Windows system. Next, go through the steps mentioned below.

  • Run the application and click the “Open” button. Now load Outlook mailboxes into the software in one of three modes: select file, select folder, or configured account. Select the option you want.

  • Now the program scans the selected file or folder and, upon completion, displays a list of mailboxes in the program interface. Here, mark the box for the desired mailbox.

  • When you click on a mailbox, the tool lists all its emails and other data in the interface. Click an email to see more details in a dedicated preview window.

  • To view an attachment, right-click and select Open, Save, or Save All. In this way, you can also see the email attachments as you would in Outlook.

  • Then go to the Export tab in the menu and select Office 365 as the saving format. Add your credentials and click Save to complete the migration from Outlook to Office 365.

Outstanding Features of Outlook to Office 365 Migration Tool

This tool is designed using advanced algorithms to guide users through the entire email conversion process, and its feature set brings many benefits. Here are some of the key features of the OST to Office 365 migration tool.

  • The software supports batch conversion of Outlook emails to MS Office 365 account.
  • You can successfully transfer unlimited OST files into your Office 365 account with no file size limit.
  • The OST to Office 365 migrator is a completely self-contained application that can work without installing the Microsoft Outlook email client.
  • This is a very easy-to-use and user-friendly application, which allows users to move and migrate their data in a risk-free way.
  • Users can load OST file data into the software panel in two ways: automatically from the default configured profile location, or manually from a particular user folder location.
  • The application lists all the items of the Outlook mailbox folder in the panel, and the user can select only the required items as needed.
  • A user can also monitor the Outlook to Office 365 migration process live, directly in the software panel.
  • This software is fully compatible with all the latest versions of Windows.

Final Thoughts

The above step-by-step guide makes it easy to export Outlook emails, contacts, calendars, and tasks to O365. It shares an expert-recommended solution for migrating data from Outlook to your Office 365 account, attachments included.

#outlook #office #migrate #migration #software #tool

Migrate WordPress to Scully

Blog images and Scully
Bad news: Scully does not know what to do with images. It skips them during conversion, and it does not copy them into the right directory together with the compiled HTML.
Good news: Scully has a plugin system. If you want to know how to write Scully plugins, please check this article by Sam Vloeberghs; it's great!
Scully plugin to copy images
We want Scully to copy images from the source directory of the Markdown files to the directory of the compiled HTML files. For that to happen, we will create a small image plugin (image.scully.plugin.ts).

#wordpress #scully #migrate


Sandy Aniston

How to Migrate Kerio to Another Server in Quick & Simple Manner?

Are you facing problems while trying to migrate Kerio to a new server? If yes, then don't worry: I will explain here the best way to export a Kerio mail server to another server in a quick and simple manner.

This blog briefly explains how to migrate Kerio Connect to a new server. Many users are looking for a solution to this problem, and we offer a simplified, complete, and reliable one here.

Many Kerio Connect users try to move their mailboxes to another server, but a big disadvantage of Kerio Connect is that email cannot simply be pulled off the mail server. As a result, even users who have their Kerio login information at hand do not know the proper way to complete this task.

Kerio Connect itself does not provide a way to move an installation to a new server, so users must turn to third-party solutions for this task.

The Kerio Converter application is one of the best solutions available at present for transferring data to a new server. It is a quick and easy way to do so without losing data.

The best tool to move mailboxes from Kerio to a new server:

As mentioned above, not all technology solutions are suitable for users. However, Kerio Migrator is one of the best solutions due to its advanced features.

The user can make changes quickly without any additional effort, and the program is suitable for both technical and non-technical users.

It has a fast and intuitive graphical user interface that requires no outside help. During migration, it preserves all items, content, and attachments, and users can export selected Kerio folders rather than an entire mailbox.

Users can rely on a professional solution to move mailboxes from Kerio to a new server; with RecoveryTools Kerio Migrator, this is very easy to do.

This tool gives you the freedom to export selected Kerio mailbox folders with all selected user data, saving all emails, components, and attachments during the conversion process.

Simple steps to transfer Kerio to Another Server:

Step 1: Download, install, and run the Kerio Converter software.

Step 2: This software offers two ways to load Kerio emails for transfer to the new server: click Select File or Select Folder as needed.

Step 3: Select the desired file format or email client from the Save Options list.

Step 4: Select the desired location to save the resulting data, for easy access and management of the data items.

Step 5: Finally, click the Convert button to start the Kerio migration process.

Amazing Features of Kerio Migration Tool:

  • You can use this tool to preserve the folder hierarchy: the output saved on the new server mirrors the original Kerio mail server structure exactly.

  • This Kerio to new server migration tool enables users to transfer data from a Kerio mail server directly to a live or hosted server; simply download the tool and start the transfer.

  • This reliable tool helps you choose which Kerio mailbox data you want to upload to the new server.

  • The application also enables automatic email forwarding, which makes it easy to forward most emails to another server.

  • The software seamlessly migrates unlimited data from Kerio Mail Server to Live Exchange Server, Hosted Exchange Server, Office 365, Gmail, Rediffmail, etc.

  • It is a 100% safe way to migrate Kerio Connect mailboxes to a new server.

Final Words:

In the above article, I have explained the best way to migrate Kerio mailboxes to a new server. A user can successfully import Kerio emails, contacts, and calendars to another server without any issue.

#migrate #kerio #to #new #server