Now save up to 52 percent when migrating to Azure Databricks

More than ever before, companies are relying on their big data and artificial intelligence (AI) systems to find new ways to reduce costs and accelerate decision-making. However, customers using on-premises systems struggle to realize these benefits due to administrative complexity, inability to scale their fixed infrastructure cost-effectively, and lack of a shared collaborative environment for data engineers, data scientists and developers.

To make it easier for customers to modernize their on-premises Spark and big data workloads to the cloud, we’re announcing a new migration offer with Azure Databricks. The offer includes:

  • **Up to a 52 percent discount** over the pay-as-you-go pricing when using the Azure Databricks Unit pre-purchase plans. This means that customers can free themselves from the complexities and constraints of their on-premises solutions and realize the benefits of the fully managed Azure Databricks service at a significant discount.
  • **Free migration assessment** for qualified customers.

#announcements #big data #migration

Sasha Roberts

Reform: Form Objects Decoupled From Models In Ruby

Reform

Form objects decoupled from your models.

Reform gives you a form object with validations and nested setup of models. It is completely framework-agnostic and doesn't care about your database.

Although Reform can be used in any Ruby framework, it comes with Rails support, works with simple_form and other form gems, allows nesting forms to implement has_one and has_many relationships, can compose a form from multiple objects, and gives you coercion.

Full Documentation

Reform is part of the Trailblazer framework. Full documentation is available on the project site.

Reform 2.2

Temporary note: Reform 2.2 does not automatically load Rails files anymore (e.g. ActiveModel::Validations). You need the reform-rails gem, see Installation.

Defining Forms

Forms are defined in separate classes. Often, these classes partially map to a model.

class AlbumForm < Reform::Form
  property :title
  validates :title, presence: true
end

Fields are declared using ::property. Validations work exactly as you know them from Rails or other frameworks. Note that validations no longer go into the model.

The API

Forms have a ridiculously simple API with only a handful of public methods.

  1. #initialize always requires a model that the form represents.
  2. #validate(params) updates the form's fields with the input data (only the form, not the model) and then runs all validations. The return value is the boolean result of the validations.
  3. #errors returns validation messages in a classic ActiveModel style.
  4. #sync writes form data back to the model. This will only use setter methods on the model(s).
  5. #save (optional) will call #save on the model and nested models. Note that this implies a #sync call.
  6. #prepopulate! (optional) will run pre-population hooks to "fill out" your form before rendering.

In addition to the main API, forms expose accessors to the defined properties. This is used for rendering or manual operations.
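The API above can be sketched in plain Ruby. The following is a minimal, hand-rolled illustration of the lifecycle, not Reform's implementation: a hypothetical `SketchAlbumForm` with a single `:title` property, showing how `#initialize`, `#validate`, `#errors`, `#sync` and `#save` fit together.

```ruby
# Hand-rolled sketch of the form lifecycle -- NOT Reform itself, just
# plain Ruby (no gems) illustrating the same API contract.
Album = Struct.new(:title) do
  def save
    @saved = true
  end
end

class SketchAlbumForm
  attr_reader :model, :errors, :title

  def initialize(model)
    @model  = model
    @title  = model.title  # read initial values from the model
    @errors = {}
  end

  # Updates the form's fields (not the model) and runs validations.
  # Returns the boolean result of the validations.
  def validate(params)
    @title = params[:title] if params.key?(:title)
    @errors = {}
    @errors[:title] = ["can't be blank"] if @title.to_s.empty?
    @errors.empty?
  end

  # Writes form data back to the model using only its setters.
  def sync
    model.title = title
  end

  # Syncs, then delegates persistence to the model.
  def save
    sync
    model.save
  end
end

form = SketchAlbumForm.new(Album.new("Old Title"))
form.validate(title: "Greatest Hits")  #=> true
form.model.title                       #=> "Old Title" (model untouched so far)
form.sync
form.model.title                       #=> "Greatest Hits"
```

Note how `#validate` only touches the form's own state; the model changes only when `#sync` (or `#save`) is called, which mirrors the behavior described above.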

Setup

In your controller or operation you create a form instance and pass in the models you want to work on.

class AlbumsController
  def new
    @form = AlbumForm.new(Album.new)
  end
end

This will also work as an editing form with an existing album.

def edit
  @form = AlbumForm.new(Album.find(1))
end

Reform will read property values from the model in setup. In our example, the AlbumForm will call album.title to populate the title field.

Rendering Forms

Your @form is now ready to be rendered, either do it yourself or use something like Rails' #form_for, simple_form or formtastic.

= form_for @form do |f|
  = f.input :title

Nested forms and collections can be easily rendered with fields_for, etc. Note that you no longer pass the model to the form builder, but the Reform instance.

Optionally, you might want to use the #prepopulate! method to pre-populate fields and prepare the form for rendering.

Validation

After form submission, you need to validate the input.

class SongsController
  def create
    @form = SongForm.new(Song.new)

    #=> params: {song: {title: "Rio", length: "366"}}

    if @form.validate(params[:song])

The #validate method first updates the values of the form - the underlying model is still treated as immutable and remains unchanged. It then runs all validations you provided in the form.

It's the only entry point for updating the form. This is by design, as separating writing and validation doesn't make sense for a form.

This allows rendering the form after validate with the data that has been submitted. However, don't get confused, the model's values are still the old, original values and are only changed after a #save or #sync operation.

Syncing Back

After validation, you have two choices: either call #save and let Reform sort out the rest. Or call #sync, which will write all the properties back to the model. In a nested form, this works recursively, of course.

It's then up to you what to do with the updated models - they're still unsaved.

Saving Forms

The easiest way to save the data is to call #save on the form.

if @form.validate(params[:song])
  @form.save  #=> populates album with incoming data
              #   by calling @form.album.title=.
else
  # handle validation errors.
end

This will sync the data to the model and then call album.save.

Sometimes, you need to do saving manually.

Default values

Reform allows default values to be provided for properties.

class AlbumForm < Reform::Form
  property :price_in_cents, default: 9_95
end
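The fallback behavior can be pictured with a plain-Ruby sketch (this mimics the idea, not Reform's internals; Reform's exact semantics for defaults are in its docs). Note that `9_95` is ordinary Ruby integer notation, an underscore digit separator, so it is simply the integer 995.

```ruby
# Plain-Ruby sketch of the default-value idea (not Reform internals):
# fall back to the declared default when the model provides no value.
# 9_95 is just Ruby digit-separator notation for the integer 995.
Album = Struct.new(:price_in_cents)

class PricedAlbumForm
  DEFAULTS = { price_in_cents: 9_95 }.freeze

  attr_reader :price_in_cents

  def initialize(model)
    @price_in_cents = model.price_in_cents || DEFAULTS[:price_in_cents]
  end
end

PricedAlbumForm.new(Album.new(nil)).price_in_cents    #=> 995
PricedAlbumForm.new(Album.new(12_99)).price_in_cents  #=> 1299
```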

Saving Forms Manually

Calling #save with a block will provide a nested hash of the form's properties and values. This does not call #save on the models and allows you to implement the saving yourself.

The block parameter is a nested hash of the form input.

  @form.save do |hash|
    hash      #=> {title: "Greatest Hits"}
    Album.create(hash)
  end

You can always access the form's model. This is helpful when you have used populators to set up objects during validation.

  @form.save do |hash|
    album = @form.model

    album.update_attributes(hash[:album])
  end

Nesting

Reform provides support for nested objects. Let's say the Album model keeps some associations.

class Album < ActiveRecord::Base
  has_one  :artist
  has_many :songs
end

The implementation details do not really matter here; as long as your album exposes readers and writers like Album#artist and Album#songs, you can define nested forms.

class AlbumForm < Reform::Form
  property :title
  validates :title, presence: true

  property :artist do
    property :full_name
    validates :full_name, presence: true
  end

  collection :songs do
    property :name
  end
end

You can also reuse an existing form from elsewhere using :form.

property :artist, form: ArtistForm

Nested Setup

Reform will wrap defined nested objects in their own forms. This happens automatically when instantiating the form.

album.songs #=> [<Song name:"Run To The Hills">]

form = AlbumForm.new(album)
form.songs[0] #=> <SongForm model: <Song name:"Run To The Hills">>
form.songs[0].name #=> "Run To The Hills"

Nested Rendering

When rendering a nested form you can use the form's readers to access the nested forms.

= text_field :title,         @form.title
= text_field "artist[name]", @form.artist.name

Or use something like #fields_for in a Rails environment.

= form_for @form do |f|
  = f.text_field :title

  = f.fields_for :artist do |a|
    = a.text_field :name

Nested Processing

#validate will assign values to the nested forms. #sync and #save work analogously to the non-nested form, just in a recursive way.

The block form of #save would give you the following data.

@form.save do |nested|
  nested #=> {title:  "Greatest Hits",
         #    artist: {full_name: "Duran Duran"},
         #    songs: [{name: "Hungry Like The Wolf"},
         #            {name: "Last Chance On The Stairways"}]
         #   }
end

Manual saving with a block is discouraged. You should rather check the Disposable docs to find out how to implement your manual tweaks with the official API.

Populating Forms

Very often, you need to give Reform some information on how to create or find nested objects when validating. This directive is called a populator and is documented here.

Installation

Add this line to your Gemfile:

gem "reform"

Reform works fine with Rails 3.1-5.0. However, inheritance of validations with ActiveModel::Validations is broken in Rails 3.2 and 4.0.

Since Reform 2.2, you have to add the reform-rails gem to your Gemfile to automatically load ActiveModel/Rails files.

gem "reform-rails"

Since Reform 2.0 you need to specify which validation backend you want to use (unless you're in a Rails environment where ActiveModel will be used).

To use ActiveModel (not recommended, as it is very outdated).

require "reform/form/active_model/validations"
Reform::Form.class_eval do
  include Reform::Form::ActiveModel::Validations
end

To use dry-validation (recommended).

require "reform/form/dry"
Reform::Form.class_eval do
  feature Reform::Form::Dry
end

Put this in an initializer or on top of your script.

Compositions

Reform allows you to map multiple models to one form. The complete documentation is here; in short, this is how it works.

class AlbumForm < Reform::Form
  include Composition

  property :id,    on: :album
  property :title, on: :album
  property :songs, on: :cd
  property :cd_id, on: :cd, from: :id
end

When initializing a composition, you have to pass a hash that contains the composees.

AlbumForm.new(album: album, cd: CD.find(1))
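The property-to-composee routing above can be pictured with a hand-rolled plain-Ruby sketch. This only mimics the mapping for illustration; it is not Reform's Composition module.

```ruby
# Hand-rolled sketch of composition: one form object reading its
# properties from two different models ("composees"). Not Reform code.
Album = Struct.new(:id, :title)
CD    = Struct.new(:id, :songs)

class ComposedAlbumForm
  attr_reader :id, :title, :songs, :cd_id

  def initialize(album:, cd:)
    @id    = album.id     # property :id,    on: :album
    @title = album.title  # property :title, on: :album
    @songs = cd.songs     # property :songs, on: :cd
    @cd_id = cd.id        # property :cd_id, on: :cd, from: :id
  end
end

form = ComposedAlbumForm.new(album: Album.new(1, "Rio"), cd: CD.new(42, ["Rio"]))
form.title  #=> "Rio"
form.cd_id  #=> 42
```

The `from: :id` option is what lets the form expose the CD's `id` under the non-clashing name `cd_id`.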

More

Reform comes with many more optional features, like hash fields, coercion, virtual fields, and so on. Check the full documentation here.

Reform is part of the Trailblazer project. Please buy my book to support the development and learn everything about Reform - there are two chapters dedicated to Reform!

Security And Strong_parameters

By explicitly defining the form layout using ::property, there is no longer any need to protect against unwanted input. strong_parameters and attr_accessible become obsolete. Reform will simply ignore undefined incoming parameters.
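The effect can be illustrated with a hand-rolled snippet (plain Ruby, not Reform's code): because the form copies only its declared properties out of the params hash, an extra key like a hypothetical `:admin` flag never reaches the model.

```ruby
# Sketch of whitelisting by declaration (plain Ruby, not Reform's code):
# only declared properties are copied from the incoming params hash,
# so undeclared keys such as :admin are silently ignored.
class SafeAlbumForm
  PROPERTIES = [:title].freeze

  attr_reader :fields

  def initialize
    @fields = {}
  end

  def validate(params)
    PROPERTIES.each do |name|
      @fields[name] = params[name] if params.key?(name)
    end
    true
  end
end

form = SafeAlbumForm.new
form.validate(title: "Greatest Hits", admin: true)  # :admin is undeclared
form.fields  #=> {:title=>"Greatest Hits"} -- :admin never made it in
```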

This is not Reform 1.x!

Temporary note: This is the README and API for Reform 2. On the public API, only a few tiny things have changed. Here are the Reform 1.2 docs.

Anyway, please upgrade and report problems and do not simply assume that we will magically find out what needs to get fixed. When in trouble, join us on Gitter.

Full documentation for Reform is available online, or support us and grab the Trailblazer book. There is an Upgrading Guide to help you migrate through versions.

Attributions!!!

Great thanks to Blake Education for giving us the freedom and time to develop this project in 2013 while working on their project.


Author: trailblazer
Source code: https://github.com/trailblazer/reform
License:  MIT license

#ruby  #ruby-on-rails

Adaline Kulas

What are the benefits of cloud migration? Reasons you should migrate

Moving applications, databases, and other business elements from a local server to a cloud server is called cloud migration. This article deals with migration techniques, requirements, and the benefits of cloud migration.

In simple terms, moving from a local server to a public cloud server is cloud migration. Gartner projects 17.5% revenue growth for the cloud market and has published a forecast through 2022.

#cloud computing services #cloud migration #all #cloud #cloud migration strategy #enterprise cloud migration strategy #business benefits of cloud migration #key benefits of cloud migration #benefits of cloud migration #types of cloud migration

Annie Emard

I-Code CNES: Help Developers Code Compliant with CNES Coding Rules

i-Code CNES is a static code analysis tool to help developers write code compliant with CNES coding rules for Fortran 77, Fortran 90 and Shell.

All the information on CNES standards coverage, and rules availabilities and limitations can be read in the documentation.

Quick start

  • Download latest i-Code version on GitHub Releases.
  • Unzip i-Code archive where you need it.
  • Add icode to your path.
  • Grant icode execution permission.
  • Run icode path/to/project/directory.

i-Code products

i-Code Core

This is the core library containing all i-Code utilities for code analysis.

i-Code Library

This is the full library containing all official checkers. It includes i-Code Core.

i-Code App or i-Code CLI

This is the common command line application for i-Code.

i-Code IDE

This is the common GUI application for i-Code.

i-Code plugin for Eclipse

The Eclipse plugin for i-Code allows you to use i-Code from the Eclipse IDE.

i-Code plugin for SonarQube

The SonarQube plugin for i-Code allows you to use i-Code through a SonarQube analysis. Please refer to sonar-icode-cnes-plugin for more details.

Installation

i-Code CLI

Just unzip the corresponding archive.

i-Code IDE

Just unzip the corresponding archive.

i-Code plugin for Eclipse

Refer to Eclipse documentation to know how to install a standard Eclipse plugin.

i-Code plugin for SonarQube

Refer to SonarQube documentation to know how to install a standard SonarQube plugin.

Get help

Use icode -h to get the following help about i-Code:

usage: icode [<FILE> [...]] [-c <arg>] [-e] [-f <arg>] [-h] [-l] [-o <arg>] [-p <arg>] [-q <arg>] [-r] [-v] [-x <arg>]
Analyze Shell, F77 & F90 code to find defects & bugs.

 -c,--checked-languages <arg>        Comma separated list of languages checked during analysis. All by default.
 -e,--exporters                      Display all available exporters.
 -f,--export-format <arg>            Set the format for result file. Default format is XML.
 -h,--help                           Display this message.
 -l,--languages                      Display all available languages.
 -o,--output <arg>                   Set the name for result file. Results are displayed in standard output by default.
 -p,--export-parameters <arg>        Comma separated list of parameters for the export. Format is:
                                     key1=value1,key2=value2,key3=value3. Default values depend on the chosen export plugin.
 -q,--list-export-parameters <arg>   Display all available parameters for the given export.
 -r,--rules                          Display all available rules.
 -v,--version                        Display version information.
 -x,--excluded-rules <arg>           Comma separated list of rules id to exclude from analysis. None by default.


Please report issues at https://github.com/lequal/i-CodeCNES/issues

Build

You can easily rebuild all i-Code products with Maven:

git clone https://github.com/lequal/i-CodeCNES icode
cd ./icode/
mvn clean install

Extending i-Code with your own plugin

If you need to add a new feature, the easiest way is to implement your own plugin by forking icode-custom-plugin-example and following its dedicated Developer Guide.

Changelog

Release 4.1.0

New features

  •  FEATURE #198 > i-Code should not crash on checker error
  •  FEATURE #201 > Simplify logged information

Fixed bugs

  •  BUG #131 > Fix for COM.xxx error
  •  BUG #147 > Test files may have a "contains" instead of an "equals" to check the location value
  •  BUG #197 > IndexOutOfBoundsException while analyzing an empty Shell script
  •  BUG #200 > JAXB is still used but missing in icode-library

Release 4.0.0

New features

  • Complete refactoring of i-Code architecture
  • Deletion of RCP in command line
  • Add version argument in command line
  • Run Jflex through maven #165
  • Jflex version update #165
  • Transform eclipse plugin into Java plugin #165
  • Command line supports a directory as argument: files will be included recursively #161 #157
  • Deletion of parallelized checkers running #161
  • Refactor test as parametrized tests #165
  • Change exe to bat and bash scripts #165
  • Allow to load plugins which are dropped in plugins directory #165
  • Update packaging of i-Code #165
  • Update CI #145
  • Reintegrate RCP as a submodule using i-Code Core #165
  • This 4.0.0 version integrates a whole new architecture described in https://github.com/lequal/i-CodeCNES/wiki
    • icode-core: contains core feature to build analyzer
    • icode-library: contains the minimal classes to run i-Code analyzers in a Java application
    • icode-app: a standalone command line version of i-Code analyzer
    • icode-ide: contains the i-Code IDE version and Eclipse plugin
    • *-language, *-rules and *-metrics: contain analyzer for several languages: Shell, Fortran, ...
  • The new documentation is available as a wiki https://github.com/lequal/i-CodeCNES/wiki
  • A Developer Guide is now available here: https://github.com/lequal/icode-custom-plugin-example/wiki/Developer-guide
  • Users are able to add custom plugins by putting their jar files into icode/plugins/ directory
  • Bug about recursive analysis is fixed and users can now simply analyze a directory, e.g.: icode .
  • The continuous integration was enhanced with Travis (https://travis-ci.org/lequal/i-CodeCNES) and SonarCloud (https://sonarcloud.io/dashboard?id=lequal_i-CodeCNES)
  • The contributing page and issue templates were updated
  • Eclipse RCP was removed from core features of i-Code
  • Some other minor enhancements and fixes
  • Fix #157: Recursive search of files for analysis command line enhancement
  • Fix #145: Connect i-Code build to SonarCloud enhancement
  • Fix #142: Add i-Code version to the xml results file enhancement
  • Fix #166: Bad support of heredoc notations bug shell
  • Fix #165: Refactor i-Code architecture enhancement
  • Fix #161: Files handling and recursive analysis issue command line fortran to analyse
  • Fix #170: Combined standalone subroutine + module in same file is crashing scanner in FORTRAN file fortran
  • Fix #159: SHDESIGNOptions false positive false positive shell
  • Fix #168: icode commandline gives incorrect and cryptic error when encountering '))' bug shell
  • Fix #158: Inconsistency between "RNC shell SH.FLOW.CheckUser example" and "I-code COMDESIGNActiveWait LEX" false positive shell
  • Fix #186: Links in README point to sonar-icode bug documentation
  • Fix #187: Consolidate community documentation documentation

Release 3.1.0

New features

  • New command line #133
  • New parsing error handling: a violation named "Parser error" is added instead of suspending the analysis. #154
  • New rules (Shell)
    • COM.DATA.Initialisation ( fix #113 )
    • COM.DATA.Invariant ( fix #114 )
    • COM.FLOW.FilePath ( fix #115 )
    • COM.FLOW.Recursion ( fix #116 )
    • COM.INST.BoolNegation ( fix #117 )
    • COM.NAME.Homonymy ( fix #118 )
    • COM.PRES.Indent ( fix #119 )
    • COM.PRES.LengthLine ( fix #120 )
    • SH.FLOW.CheckCodeReturn ( fix #121 )
    • SH.Ref.Export ( fix #122 #52 #138 #137 )
    • SH.SYNC.Signals #123
  • New metrics
    • SH.MET.LineOfComment
    • F77.MET.LineOfComment
    • F90.MET.LineOfComment

Fixes

  • Shell
    • All checkers :
      • Function correction on FUNCSTART and FNAME #138 #137 #150
    • COM.FLOW.CaseSwitch :
      • Case handling fixed #135
      • Function localization fixed #52
    • COM.DATA.LoopCondition
      • Function localization fixed #52
    • COM.DESIGN.ActiveWait
      • Function localization fixed #52
    • COM.FLOW.Abort
      • Function localization fixed #52

Release 3.0.1

  • Fix of Eclipse's plug-in performances #101

Release 3.0.0

New features

  • Command line for Windows, MacOS & Linux #64
  • Standalone version i-Code CNES IDE #1
  • New Extension Points
    • To add languages #32
    • To add checkers #23
    • To add configurations
    • To add exports #19 #26
  • API
    • To run analysis #16
    • To export analysis #19  #26
    • To reach configurations & preferences
  • Shells metrics (SH.MET.LineOfCode, SH.MET.RatioComment, SH.MET.Nesting, SH.MET.ComplexitySimplified) #30
  • Automated build #1

Bug fixes & enhancements

  • Analysis performances improvements  #14
  • User Interface preference page improvements  #36
  • Improvements of analysis failure notifications #50
  • XML and CSV export improvements #69  #19

Minor fixes and other enhancements : milestone 3.0.0.

Previous Releases

Feedback and Support

Contact : L-lequal@cnes.fr

Bugs and feature requests: https://github.com/lequal/i-CodeCNES/issues

How to contribute

If you experienced a problem with the plugin, please open an issue. In the issue, please explain how to reproduce it and paste the log.

If you want to open a PR, please state the reason for the pull request in its description. If the pull request fixes an issue, please reference the issue number or explain in the PR how to reproduce the issue.

License

Copyright 2019 LEQUAL.

This software is licensed under the terms in the file named "LICENSE" in this directory.

The software uses Java files generated with JFlex (http://jflex.de). The terms of this library's license are available here: http://jflex.de/copying.html

Author: cnescatlab
Source Code: https://github.com/cnescatlab/i-CodeCNES
License: EPL-1.0 License

#fortran 

Aisu Joesph

Azure Series #2: Single Server Deployment (Output)

No organization that is on a growth path, or that intends to expand its customer base and enter new markets, will restrict its infrastructure and design to a single database option. Database selection happens at several levels:

  • a. The needs assessment
  • b. Selecting the kind of database
  • c. Selecting the queues for communication
  • d. Selecting the technology player

Options to choose from:

  1. Transactional Databases:
    • Azure selection — Data Factory, Redis, CosmosDB, Azure SQL, Postgres SQL, MySQL, MariaDB, SQL Database, Managed Server
  2. Data Warehousing:
    • Azure selection — CosmosDB
    • Delta Lake — Databricks' Lakehouse Architecture
  3. Non-Relational Database:
    • Azure selection — CosmosDB
  4. Data Lake:
    • Azure Data Lake
    • Delta Lake — Databricks
  5. Big Data and Analytics:
    • Databricks
    • Azure — HDInsights, Azure Synapse Analytics, Event Hubs, Data Lake Storage Gen1, Azure Data Explorer Clusters, Data Factories, Azure Databricks, Analytics Services, Stream Analytics, Website UI, Cognitive Search, PowerBI, Queries, Reports
  6. Machine Learning:
    • Azure — Azure Synapse Analytics, Machine Learning, Genomics accounts, Bot Services, Machine Learning Studio, Cognitive Services, Bonsai

Key data platform services we would like to highlight:

  1. Azure Data Factory (ADF)
  2. Azure Synapse Analytics
  3. Azure Stream Analytics
  4. Azure Databricks
  5. Azure Cognitive Services
  6. Azure Data Lake Storage
  7. Azure HDInsight
  8. Azure CosmosDB
  9. Azure SQL Database

#azure-databricks #azure #microsoft-azure-analytics #azure-data-factory #azure series

Eric Bukenya

Learn NoSQL in Azure: Diving Deeper into Azure Cosmos DB

This article is a part of the series – Learn NoSQL in Azure, where we explore Azure Cosmos DB as a part of the non-relational database systems used widely for a variety of applications. Azure Cosmos DB is one of Microsoft's serverless databases on Azure; it is highly scalable and distributed across all locations that run on Azure. It is offered as a platform as a service (PaaS), and you can develop databases with very high throughput and very low latency. Using Azure Cosmos DB, customers can replicate their data across multiple locations around the globe, and also across multiple locations within the same region. This makes Cosmos DB a highly available database service, with almost 99.999% availability for reads and writes in multi-region mode and almost 99.99% availability in single-region mode.

In this article, we will focus more on how Azure Cosmos DB works behind the scenes and how you can get started with it using the Azure Portal. We will also explore how Cosmos DB is priced and understand the pricing model in detail.

How Azure Cosmos DB works

As already mentioned, Azure Cosmos DB is a multi-model NoSQL database service that is geographically distributed across multiple Azure locations. This lets customers deploy their databases across multiple locations around the globe, which helps reduce read latency for users of the application.

Azure Cosmos DB is distributed across the globe. Let's suppose you have a web application that is hosted in India. In that case, the database in India will be considered the master database for writes, and all the other databases can be considered read replicas. Whenever new data is generated, it is written to the database in India first and then synchronized with the other databases.

Consistency Levels

While maintaining data across multiple regions, the most common challenge is the latency with which data becomes available in the other regions. For example, when data is written to the database in India, users in India will be able to see that data sooner than users in the US. This is due to the latency of synchronization between the two regions. To manage this, customers can choose from a few modes that define how soon they want their data to be made available in the other regions. Azure Cosmos DB offers five levels of consistency, which are as follows:

  • Strong
  • Bounded staleness
  • Session
  • Consistent prefix
  • Eventual

In most common NoSQL databases, there are only two such levels – Strong and Eventual. Strong is the most consistent level, while Eventual is the least. As we move from Strong to Eventual, consistency decreases but availability and throughput increase. This is a trade-off that customers need to decide on based on the criticality of their applications. If you want to read about the consistency levels in more detail, the official guide from Microsoft is the easiest to understand. You can refer to it here.

Azure Cosmos DB Pricing Model

Now that we have some idea about working with the NoSQL database – Azure Cosmos DB on Azure, let us try to understand how the database is priced. In order to work with any cloud-based services, it is essential that you have a sound knowledge of how the services are charged, otherwise, you might end up paying something much higher than your expectations.

If you browse to the pricing page of Azure Cosmos DB, you can see that there are two modes in which the database services are billed.

  • Database Operations – Whenever you execute or run queries against your NoSQL database, some resources are used. Azure measures this usage in Request Units (RU). The number of RUs consumed per second is aggregated and billed.
  • Consumed Storage – As you start storing data in your database, it takes up space. This storage is billed per the standard SSD-based storage rates across all Azure locations globally.

Let’s learn about this in more detail.
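As a back-of-the-envelope illustration of how the two billing meters combine, here is a small Ruby sketch. The rates below are hypothetical placeholders invented for the example, not Azure's actual prices; check the Azure Cosmos DB pricing page for current numbers.

```ruby
# Hypothetical rates for illustration only -- NOT Azure's actual prices.
HYPOTHETICAL_RU_RATE_PER_HOUR = 0.008  # assumed $ per 100 RU/s per hour
HYPOTHETICAL_STORAGE_RATE     = 0.25   # assumed $ per GB per month

# Combine the two billing meters: provisioned throughput and consumed storage.
def monthly_cosmos_estimate(provisioned_ru_s:, storage_gb:, hours: 730)
  throughput_cost = (provisioned_ru_s / 100.0) * HYPOTHETICAL_RU_RATE_PER_HOUR * hours
  storage_cost    = storage_gb * HYPOTHETICAL_STORAGE_RATE
  (throughput_cost + storage_cost).round(2)
end

# e.g. 400 RU/s provisioned and 10 GB stored over a 730-hour month:
monthly_cosmos_estimate(provisioned_ru_s: 400, storage_gb: 10)  #=> 25.86
```

The point of the sketch is the shape of the bill, not the numbers: throughput is metered per unit of time while storage is metered per GB per month.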

#azure #azure cosmos db #nosql #nosql in azure