Trystan Doyle

NDepend, a powerful static analysis tool for .NET | Awaiting Bits

Zanid Haytam’s personal blog about Programming, Data Science and random stuff.

#ndepend #.net #bits #programming

Tyrique Littel

Static Code Analysis: What It Is and How to Use It

Static code analysis refers to the technique of approximating the runtime behavior of a program. In other words, it is the process of predicting the output of a program without actually executing it.

Lately, however, the term “Static Code Analysis” is more commonly used to refer to one of the applications of this technique rather than the technique itself — program comprehension — understanding the program and detecting issues in it (anything from syntax errors to type mismatches, performance hogs, likely bugs, security loopholes, etc.). This is the usage we’ll be referring to throughout this post.

“The refinement of techniques for the prompt discovery of error serves as well as any other as a hallmark of what we mean by science.”

  • J. Robert Oppenheimer

Outline

We cover a lot of ground in this post. The aim is to build an understanding of static code analysis and to equip you with the basic theory and the right tools so that you can write analyzers on your own.

We start our journey by laying down the essential parts of the pipeline a compiler follows to understand what a piece of code does. We then learn where to tap into this pipeline to plug in our analyzers and extract meaningful information. In the latter half, we get our feet wet and write four such static analyzers, completely from scratch, in Python.

Note that although the ideas here are discussed in the light of Python, static code analyzers across all programming languages are carved out along similar lines. We chose Python because of its easy-to-use ast module and the wide adoption of the language itself.

How does it all work?

Before a computer can finally “understand” and execute a piece of code, it goes through a series of complicated transformations:

[Figure: the static analysis workflow]

As the diagram shows, static analyzers feed on the output of these stages. To better understand the static analysis techniques, let’s look at each of these steps in some more detail:

Scanning

The first thing that a compiler does when trying to understand a piece of code is to break it down into smaller chunks, also known as tokens. Tokens are akin to what words are in a language.

A token might consist of a single character, like (; a literal (like the integer 7 or the string 'Bob'); or a reserved keyword of the language (e.g., def in Python). Characters which do not contribute to the semantics of a program, like trailing whitespace and comments, are often discarded by the scanner.

Python provides the tokenize module in its standard library to let you play around with tokens:

Python

import io
import tokenize

code = b"color = input('Enter your favourite color: ')"

for token in tokenize.tokenize(io.BytesIO(code).readline):
    print(token)

Output

TokenInfo(type=62 (ENCODING),  string='utf-8')
TokenInfo(type=1  (NAME),      string='color')
TokenInfo(type=54 (OP),        string='=')
TokenInfo(type=1  (NAME),      string='input')
TokenInfo(type=54 (OP),        string='(')
TokenInfo(type=3  (STRING),    string="'Enter your favourite color: '")
TokenInfo(type=54 (OP),        string=')')
TokenInfo(type=4  (NEWLINE),   string='')
TokenInfo(type=0  (ENDMARKER), string='')

(Note that for the sake of readability, I’ve omitted a few columns from the result above — metadata like starting index, ending index, a copy of the line on which a token occurs, etc.)
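Tokens alone already enable simple checks, but most analyzers work one stage later, on the syntax tree. As a small taste of what’s ahead, here is a minimal, illustrative sketch (not one of the four analyzers built later in this post) that uses the ast module to flag every call to print():

Python

import ast

source = """
def greet(name):
    print('Hello, ' + name)
"""

class PrintFinder(ast.NodeVisitor):
    # Report every call whose target is the bare name `print`.
    def visit_Call(self, node):
        if isinstance(node.func, ast.Name) and node.func.id == "print":
            print(f"print() call on line {node.lineno}")
        self.generic_visit(node)  # keep walking into nested nodes

PrintFinder().visit(ast.parse(source))

Running it reports "print() call on line 3" — the analyzer located the call without ever executing greet().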

#code quality #code review #static analysis #static code analysis #code analysis #static analysis tools #code review tips #static code analyzer #static code analysis tool #static analyzer

Einar Hintz

jQuery Ajax CRUD in ASP.NET Core MVC with Modal Popup

In this article, we’ll discuss how to use jQuery Ajax for ASP.NET Core MVC CRUD operations using a Bootstrap modal. With jQuery Ajax, we can make HTTP requests to controller action methods without reloading the entire page, like in a single-page application; a sketch of such a request follows.
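Here is a minimal sketch of such a request (the action URL, form id, and response shape are illustrative assumptions, not taken from this project):

// Post a serialized form to an MVC controller action without a page reload.
$.ajax({
    type: "POST",
    url: "/Transaction/AddOrEdit",             // hypothetical action method
    data: $("#transactionForm").serialize(),   // hypothetical form id
    success: function (res) {
        // Swap in the partial view HTML returned by the action.
        $("#viewAll").html(res.html);
    },
    error: function (err) {
        console.log(err);
    }
});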

To demonstrate the CRUD operations (insert, update, delete, and retrieve), the project deals with the details of a typical bank transaction. GitHub repository for this demo project: https://bit.ly/33KTJAu.

Sub-topics discussed :

  • Form design for insert and update operation.
  • Display forms in modal popup dialog.
  • Form post using jQuery Ajax.
  • Implement MVC CRUD operations with jQuery Ajax.
  • Loading spinner in .NET Core MVC.
  • Prevent direct access to MVC action method.

Create ASP.NET Core MVC Project

In Visual Studio 2019, go to File > New > Project (Ctrl + Shift + N).

From the new project window, select ASP.NET Core Web Application.

[Image: creating an ASP.NET Core web application project in Visual Studio]

Once you have provided the project name and location, select Web Application (Model-View-Controller) and uncheck HTTPS Configuration. These steps will create a brand new ASP.NET Core MVC project.

[Image: project template selection for ASP.NET Core MVC]

Setup a Database

Let’s create a database for this application using Entity Framework Core. For that, we have to install the corresponding NuGet packages. Right-click the project in Solution Explorer, select Manage NuGet Packages, and from the Browse tab install the following three packages.

[Image: list of NuGet packages for Entity Framework Core]

Now let’s define the DB model class file, /Models/TransactionModel.cs:

public class TransactionModel
{
    [Key]
    public int TransactionId { get; set; }

    [Column(TypeName ="nvarchar(12)")]
    [DisplayName("Account Number")]
    [Required(ErrorMessage ="This Field is required.")]
    [MaxLength(12,ErrorMessage ="Maximum 12 characters only")]
    public string AccountNumber { get; set; }

    [Column(TypeName ="nvarchar(100)")]
    [DisplayName("Beneficiary Name")]
    [Required(ErrorMessage = "This Field is required.")]
    public string BeneficiaryName { get; set; }

    [Column(TypeName ="nvarchar(100)")]
    [DisplayName("Bank Name")]
    [Required(ErrorMessage = "This Field is required.")]
    public string BankName { get; set; }

    [Column(TypeName ="nvarchar(11)")]
    [DisplayName("SWIFT Code")]
    [Required(ErrorMessage = "This Field is required.")]
    [MaxLength(11)]
    public string SWIFTCode { get; set; }

    [DisplayName("Amount")]
    [Required(ErrorMessage = "This Field is required.")]
    public int Amount { get; set; }

    [DisplayFormat(DataFormatString = "{0:MM/dd/yyyy}")]
    public DateTime Date { get; set; }
}

C#

Here we’ve defined the model properties for the transaction, with proper validation. Now let’s define the DbContext class for EF Core.
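A minimal sketch of what that DbContext might look like (the class name and DbSet name are illustrative assumptions; the post’s actual code may differ):

using Microsoft.EntityFrameworkCore;

public class TransactionDbContext : DbContext
{
    // The options (provider, connection string) are supplied via dependency
    // injection, typically configured in Startup.ConfigureServices.
    public TransactionDbContext(DbContextOptions<TransactionDbContext> options)
        : base(options)
    {
    }

    // Maps the TransactionModel entity to a database table.
    public DbSet<TransactionModel> Transactions { get; set; }
}

C#

With the context registered via services.AddDbContext<TransactionDbContext>(...), controllers can then receive it through constructor injection and query the Transactions set directly.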

#asp.net core article #asp.net core #add loading spinner in asp.net core #asp.net core crud without reloading #asp.net core jquery ajax form #asp.net core modal dialog #asp.net core mvc crud using jquery ajax #asp.net core mvc with jquery and ajax #asp.net core popup window #bootstrap modal popup in asp.net core mvc. bootstrap modal popup in asp.net core #delete and viewall in asp.net core #jquery ajax - insert #jquery ajax form post #modal popup dialog in asp.net core #no direct access action method #update #validation in modal popup


Comparing Power BI with other tools

The Business Intelligence (BI) world has been moving towards self-service BI. As expected, several vendors have created tools empowering regular users to gain insights from their data. Among the many, there is Power BI. Nowadays, users want to understand the differences between Product X and Power BI.


One of the most common questions in conferences and user group sessions is, “Can you provide a comparison between this product and Power BI?”.

The answer is almost always, “No, I cannot compare them, because they are too different”. First, one needs to understand the deep difference between Power BI and most other reporting tools on the market; only then does a comparison make any sense. As a matter of fact, I think Power BI can be compared to only a few products on the market today. I would like to add my point of view to the discussion.


Indeed, Power BI is a tremendously powerful data modeling tool that happens to come with a pretty face; most other products are beautifully crafted reporting tools with a pretty face. The only thing they have in common is the pretty face. If you stop at what they have in common, you are only comparing a small fraction of the whole product, and that would be unfair.

To go further, a deeper understanding of basic BI concepts is needed.

Beware: this article is biased. I love Power BI and I make my living out of it. Nevertheless, I am a BI professional; I started working with Business Intelligence many years ago and I have gathered experience that I can share. I will try to be as fair as I can in this post, as my goal is not to provide a comparison with any tool. The goal of this post is to help you understand what you really need to evaluate when making (or reading) any comparison between different BI products.

At the top level, any Business Intelligence solution is composed of three layers:

  • Raw data: the data sources one wants to analyze. Raw data comes as is.
  • Semantic model: where the data is re-arranged to optimize it for analysis. This is also where you define the calculations required by the reports.
  • Reports: the nice dashboards you can build with the tool.

Power BI manages all three layers: you start from raw data, you can build a semantic layer, and finally you prepare reports. Most other reporting tools are focused on the last layer and are limited in the previous two. In other words, they are missing the capability to build a real semantic layer. It is important to clarify what a semantic layer is, to understand what you would miss by choosing a different product.

In the old days of BI, there was a clear separation between users and developers. A BI developer would build a project to help users extract insights from their data and build reports. Users did not need to understand tables, relationships, or calculations. The developer was in charge of shaping the tables, providing predefined calculations, and giving sensible names to entities. Leveraging the semantic model, users did not have to know DAX, MDX, or SQL.

A semantic model lets users interact with business entities like customers, sales, and products. Users would place those entities in reports made with Excel or with other reporting tools. Regular users were happy with just Excel and a pivot table. More advanced users wanted more powerful tools, which led to the creation of several reporting tools, each with its own ad-hoc programming language for authoring more advanced formulas. Regardless, the important thing is that no matter how powerful those tools were, they were still reporting tools that relied on the existence of a previously crafted semantic model. No semantic model, no reporting.

Picture this: a BI tool lets a developer build a semantic model. A reporting tool lets a user build a report on top of an existing semantic model. You need both to create a BI solution.

Unfortunately, building a BI project takes time, and users were hungry for reports. This led to the start of the self-service BI era. Self-service BI is the idea of users building reports themselves, to reduce development time and to build democratic knowledge about data. It sounds cool and terrifying at the same time.

Anyway, this is where we are today. Driven by the market, several vendors started to build self-service BI tools. A few new products appeared on the market; more often, existing tools evolved into new ones targeting self-service BI. Keep in mind: any self-service BI tool requires the functionality to build both the semantic model and the report in the same tool. Thus, depending on where you start, you have two options to evolve an existing product into a self-service product:

  • If you already have a semantic model tool, you need to add reporting capabilities. You also need to make the tool easier to use, because the target is no longer a BI professional but a regular user.
  • If you already have a reporting tool, you need to add the capability to build a semantic model, because your users need to massage the data and build calculations on top of the resulting model.

In both cases, you end up with a tool that combines the capability to create a semantic model with the capability to build reports. After this first step, you can add tons of different features: sharing with other users, wizards that automatically connect to other services, improvements to the formula language, and so on. But the core is always the same: a semantic model and a reporting tool, bound together in a nice package.

Even though we consider Power BI to be a new product, it is actually the evolution of Power Pivot and Analysis Services Tabular (the semantic model), Power Query (the querying tool), and Power View (the first version of the reporting tool, released with Excel and SharePoint). Other vendors took similar steps from different starting points. It is fair to say that several vendors started from a reporting tool and added the semantic model to it.

Now, if you need to compare two BI tools, you need to compare at least these two features: the semantic model and the tools to build a report.

Say you want to compare Product X with Power BI; you show me how easy it is to build a gorgeous report on top of a SQL view, much easier and much more powerful than in Power BI. Cool, but you are only comparing a fraction of both products. Reporting-wise, sure, Product X is better than Power BI. But there are other considerations: can you load multiple tables into Product X? Can you build relationships between them? Can you use a programming language to author complex calculations that involve scanning different tables? All these operations belong to the semantic model. A fair comparison needs to cover all the features.

This is what Power BI offers you:

  • Power Query: a data transformation tool which is easy to use and yet incredibly powerful. It can load virtually anything and join data from different sources.
  • A modeling environment where you can build different kinds of relationships between tables and build powerful models. It does not hurt that it runs on top of one of the fastest databases I have ever seen.
  • DAX: a programming language which is not easy, but which lets you author nearly any query and calculation. Yes, on this I am biased for sure!
  • Power BI: a reporting engine which is very good at building dashboards and reports. It can also be extended with custom visuals and third-party products.

Then there is web-based reporting and sharing, a mobile experience, the ability to load from nearly any data source in the cloud or on premises, and many other useful features. Yet the core is composed of the four features above. If you want to compare apples to apples, you need to compare at least these four parts. Be mindful: you need all of them. A tool that forces you to build a single table, because it does not let you relate two tables, is nothing but a nice reporting tool. Comparing it to Power BI does not make much sense to me.

Moreover, it is no coincidence that to learn Power BI, one needs to learn new programming languages. Each feature has its own language, and this is just the right thing to have.

Finally, reporting. Reporting is only the last part, even though it is the most visible one. You might find other products that are better than Power BI when it comes to reporting. This is fine, as long as you are aware that you are comparing only a fraction of Power BI with the whole of Product X.

I love Power BI, and I would really love to see a fair comparison between Power BI and any other product. We could learn a lot from the topic. But for it to be fair, it cannot just be based on how easy it is to build a pie chart (just kidding! You are not using a pie chart, are you?). One needs to evaluate everything both products have to offer.


#power bi training #power bi course #learn power bi #microsoft power bi training #power bi online training #power bi online course

Tyrique Littel

Effective Code Reviews: A Primer

Peer code reviews as a process have increasingly been adopted by engineering teams around the world, and for good reason: code reviews have been proven to improve software quality and save developers’ time in the long run. A lot has been written by leading software engineering practitioners about how code reviews help engineering teams. My favorite is this quote by Karl Wiegers, author of the seminal paper on this topic, Humanizing Peer Reviews:

Peer review – an activity in which people other than the author of a software deliverable examine it for defects and improvement opportunities – is one of the most powerful software quality tools available. Peer review methods include inspections, walkthroughs, peer deskchecks, and other similar activities. After experiencing the benefits of peer reviews for nearly fifteen years, I would never work in a team that did not perform them.

It is worth the time and effort to put together a code review strategy and consistently follow it in the team. In essence, this has a two-pronged benefit: more pairs of eyes looking at the code decrease the chances of bugs and bad design patterns entering your codebase, and embracing the process fosters knowledge sharing and a positive collaboration culture in the team.

Here are six tips to ensure effective peer reviews in your team.

1. Keep the Changes Small and Focused

Code reviews require developers to look at someone else’s code, most of which is completely new to them. Too many lines of code to review at once require a huge amount of cognitive effort, and the quality of the review diminishes as the size of the change increases. While there’s no golden number of LOCs, it is recommended to create small pull requests which can be managed easily. If a lot of changes are going into a release, it is better to chunk them down into a number of small pull requests.

2. Ensure Logical Coherence of Changes

Code reviews are most effective when the changes are focused and logically coherent. When refactoring, refrain from making behavioral changes. Similarly, behavioral changes should not include refactoring or style-violation fixes. Following this convention prevents unintended changes from creeping into the codebase unnoticed.

3. Have Automated Tests, and Track Coverage

Automated tests of your preferred flavor (unit tests, integration tests, end-to-end tests, etc.) help automatically ensure correctness. Consistently ensuring that proposed changes are covered by some kind of automated test frees up time for more qualitative review, allowing for a more insightful and in-depth conversation on deeper issues.

4. Self-Review Changes Before Submitting for Peer Review

A change can implement a new feature or fix an existing issue. It is recommended that the requester submit only changes that are complete and manually tested for correctness. Before creating the pull request, a quick glance at the proposed changes helps ensure that no extraneous files are added to the changeset. This saves tons of time for the reviewers.

5. Automate What Can Be Automated

Human review time is expensive, and the best use of a developer’s time is reviewing the qualitative aspects of code: logic, design patterns, software architecture, and so on. Linting tools can automatically take care of style and formatting conventions. Continuous quality tools can help catch potential bugs, anti-patterns, and security issues, which the developer can fix before making a change request. Most of these tools integrate well with code hosting platforms.

6. Be Positive, Polite, and Respectful

Finally, be cognizant of the fact that people on both sides of the review are only human. Offer positive feedback, and accept criticism humbly. Instead of dwelling on the literal meaning of words, it really pays off to see reviews as people trying to achieve what’s best for the team, albeit in possibly different ways. Keeping this in mind can save a lot of resentment and unmitigated negativity.

#agile #code quality #code review #static analysis #code analysis #code reviews #static analysis tools #code review tips #continuous quality #static analyzer