1604023200
Peer code reviews as a process have increasingly been adopted by engineering teams around the world. And for good reason — code reviews have been proven to improve software quality and save developers’ time in the long run. Leading software engineering practitioners have written a lot about how code reviews help engineering teams. My favorite is this quote by Karl Wiegers, author of the seminal paper on this topic, Humanizing Peer Reviews:
Peer review – an activity in which people other than the author of a software deliverable examine it for defects and improvement opportunities – is one of the most powerful software quality tools available. Peer review methods include inspections, walkthroughs, peer deskchecks, and other similar activities. After experiencing the benefits of peer reviews for nearly fifteen years, I would never work in a team that did not perform them.
It is worth the time and effort to put together a code review strategy and consistently follow it in the team. In essence, this has a two-pronged benefit: more pairs of eyes looking at the code decrease the chances of bugs and bad design patterns entering your codebase, and embracing the process fosters knowledge sharing and a culture of positive collaboration in the team.
Here are 6 tips to ensure effective peer reviews in your team.
Code reviews require developers to look at someone else’s code, most of which is completely new to them. Reviewing too many lines of code at once takes a huge amount of cognitive effort, and the quality of the review diminishes as the size of the change increases. While there’s no golden number of LOCs, it is recommended to create small pull requests that can be managed easily. If a release involves a lot of changes, it is better to break them down into a number of small pull requests.
Code reviews are most effective when the changes are focused and logically coherent. When refactoring, refrain from making behavioral changes. Similarly, behavioral changes should not include refactoring or style fixes. Following this convention prevents unintended changes from creeping into the codebase unnoticed.
Automated tests of your preferred flavor — unit tests, integration tests, end-to-end tests, etc. — help ensure correctness automatically. Consistently ensuring that proposed changes are covered by some kind of automated test frees up time for more qualitative review, allowing for a more insightful and in-depth conversation on deeper issues.
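Here is a minimal sketch of the kind of check that buys back review time: a unit test for a hypothetical slugify helper, written with Python’s built-in unittest module (both the helper and the test are made up for illustration).

Python
import unittest

def slugify(title: str) -> str:
    # Hypothetical helper under review: lowercase a title and
    # join its words with hyphens.
    return "-".join(title.lower().split())

class SlugifyTest(unittest.TestCase):
    def test_spaces_become_hyphens(self):
        self.assertEqual(slugify("Effective Code Reviews"),
                         "effective-code-reviews")

if __name__ == "__main__":
    unittest.main()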
A change can implement a new feature or fix an existing issue. It is recommended that the requester submit only changes that are complete and have been manually tested for correctness. Before creating the pull request, a quick glance at the proposed changes helps ensure that no extraneous files are added to the changeset. This saves tons of time for the reviewers.
Human review time is expensive, and the best use of a developer’s time is reviewing the qualitative aspects of code — logic, design patterns, software architecture, and so on. Linting tools can automatically take care of style and formatting conventions. Continuous Quality tools can catch potential bugs, anti-patterns, and security issues, which the developer can fix before making a change request. Most of these tools integrate well with code hosting platforms too.
Finally, be cognizant of the fact that the people on both sides of the review are only human. Offer positive feedback, and accept criticism humbly. Instead of dwelling on the literal meaning of words, it really pays off to see reviews as people trying to achieve what’s best for the team, albeit in possibly different ways. Keeping this in mind can prevent a lot of resentment and unmitigated negativity.
#agile #code quality #code review #static analysis #code analysis #code reviews #static analysis tools #code review tips #continuous quality #static analyzer
1604048400
The story of Softagram is a long one with many twists. It began a long time ago in a small company working on static analysis tools. After many phases, Softagram now focuses on helping developers get visual feedback on a code change: how the software design is evolving in the pull request under review.
While it is easy enough to write a 20 KLOC app without the help of tooling, things usually start getting complicated once the system grows beyond 100 KLOC. The risk of the god-class anti-pattern and of muddled responsibilities increases rapidly as the software grows larger.
To help with that, software evolution can be tracked safely with explicit dependency-change reports attached automatically to each pull request. Blocking a bad PR becomes easy, and having visual reports also has a democratizing effect on code review.
First, architectural analysis of the code identifies how the delta impacts the codebase. Language-specific analyzers extract the essential internal and external dependency structures from each of the mainstream programming languages.
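For Python, for instance, a bare-bones dependency extractor can be built on the standard ast module. This is only a sketch of the general idea, not Softagram’s actual analyzer:

Python
import ast

source = """
import os
from collections import defaultdict
"""

# Walk the syntax tree and collect every module this snippet imports.
imports = set()
for node in ast.walk(ast.parse(source)):
    if isinstance(node, ast.Import):
        imports.update(alias.name for alias in node.names)
    elif isinstance(node, ast.ImportFrom) and node.module:
        imports.add(node.module)

print(sorted(imports))  # ['collections', 'os']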
Next, the delta is checked for rule violations and anomalies, e.g. cyclical dependencies. Graph theory is a big help in finding unwanted or unusual dependencies.
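Detecting a cycle in a dependency graph, for example, comes down to a simple depth-first search. The module names below are hypothetical, chosen only to illustrate the idea:

Python
# Depth-first search over a made-up module dependency graph.
deps = {
    "app": ["auth", "billing"],
    "auth": ["db"],
    "billing": ["db", "app"],  # billing -> app closes a cycle
    "db": [],
}

def find_cycle(graph):
    visiting, done = set(), set()

    def dfs(node, path):
        visiting.add(node)
        for dep in graph.get(node, []):
            if dep in visiting:  # back edge: we found a cycle
                return path + [dep]
            if dep not in done:
                found = dfs(dep, path + [dep])
                if found:
                    return found
        visiting.discard(node)
        done.add(node)
        return None

    for node in graph:
        if node not in done:
            cycle = dfs(node, [node])
            if cycle:
                return cycle
    return None

print(find_cycle(deps))  # ['app', 'billing', 'app']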
Finally, a visualization is built for humans. Complex structures such as software are not easy to represent without the help of graph visualization, and this is where the change-graph visualization technology developed over the last few years plays its vital role.
#automated-code-review #code-review-automation #code-reviews #devsecops #software-development #code-review #coding #good-company
1621137960
Having another pair of eyes scan your code is always useful and helps you spot mistakes before you break production. You need not be an expert to review someone’s code. Some experience with the programming language and a review checklist should help you get started. We’ve put together a list of things you should keep in mind when you’re reviewing Java code. Read on!
NullPointerException
…
#java #code quality #java tutorial #code analysis #code reviews #code review tips #code analysis tools #java tutorial for beginners #java code review
1604008800
Static code analysis refers to the technique of approximating the runtime behavior of a program. In other words, it is the process of predicting the output of a program without actually executing it.
Lately, however, the term “Static Code Analysis” is more commonly used to refer to one of the applications of this technique rather than the technique itself — program comprehension — understanding the program and detecting issues in it (anything from syntax errors to type mismatches, performance hogs, likely bugs, security loopholes, etc.). This is the usage we’ll be referring to throughout this post.
“The refinement of techniques for the prompt discovery of error serves as well as any other as a hallmark of what we mean by science.”
We cover a lot of ground in this post. The aim is to build an understanding of static code analysis and to equip you with the basic theory and the right tools so that you can write analyzers on your own.
We start our journey by laying down the essential parts of the pipeline which a compiler follows to understand what a piece of code does. We learn where to tap points in this pipeline to plug in our analyzers and extract meaningful information. In the latter half, we get our feet wet and write four such static analyzers, completely from scratch, in Python.
Note that although the ideas here are discussed in light of Python, static code analyzers across all programming languages are carved out along similar lines. We chose Python because of the availability of an easy-to-use ast module, and the wide adoption of the language itself.
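To get a feel for the ast module before we dive in, here is a minimal, standalone example that parses a one-line program and prints its syntax tree:

Python
import ast

# Parse a one-line program and pretty-print its abstract syntax tree.
tree = ast.parse("answer = 42")
print(ast.dump(tree, indent=2))  # the indent= keyword requires Python 3.9+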
Before a computer can finally “understand” and execute a piece of code, it goes through a series of complicated transformations:
As you can see in the diagram (go ahead, zoom it!), static analyzers feed on the output of these stages. To better understand the static analysis techniques, let’s look at each of these steps in some more detail:
The first thing that a compiler does when trying to understand a piece of code is to break it down into smaller chunks, also known as tokens. Tokens are akin to what words are in a language.
A token might consist of either a single character, like (, or literals (like integers, strings, e.g., 7, Bob, etc.), or reserved keywords of that language (e.g., def in Python). Characters which do not contribute towards the semantics of a program, like trailing whitespace, comments, etc., are often discarded by the scanner.
Python provides the tokenize module in its standard library to let you play around with tokens:
Python
import io
import tokenize

code = b"color = input('Enter your favourite color: ')"

for token in tokenize.tokenize(io.BytesIO(code).readline):
    print(token)
Output
TokenInfo(type=62 (ENCODING), string='utf-8')
TokenInfo(type=1 (NAME), string='color')
TokenInfo(type=54 (OP), string='=')
TokenInfo(type=1 (NAME), string='input')
TokenInfo(type=54 (OP), string='(')
TokenInfo(type=3 (STRING), string="'Enter your favourite color: '")
TokenInfo(type=54 (OP), string=')')
TokenInfo(type=4 (NEWLINE), string='')
TokenInfo(type=0 (ENDMARKER), string='')
(Note that for the sake of readability, I’ve omitted a few columns from the result above — metadata like starting index, ending index, a copy of the line on which a token occurs, etc.)
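The token stream alone is already enough to build a first, very small analyzer. As a sketch (this checker and its rule are invented for illustration), here is a checker that flags TODO markers left behind in comments:

Python
import io
import tokenize

def find_todo_comments(source: bytes):
    # Yield (line number, comment text) for every comment containing TODO.
    for token in tokenize.tokenize(io.BytesIO(source).readline):
        if token.type == tokenize.COMMENT and "TODO" in token.string:
            yield token.start[0], token.string

code = b"x = compute()  # TODO: validate input\n"
for line_no, comment in find_todo_comments(code):
    print(f"line {line_no}: {comment}")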
#code quality #code review #static analysis #static code analysis #code analysis #static analysis tools #code review tips #static code analyzer #static code analysis tool #static analyzer
1626984360
Using with to Open Files
When you open a file without the with statement, you need to remember to close it by calling close() explicitly once you are done processing it. Even when you close the resource explicitly, an exception can occur before the resource is actually released. This can cause inconsistencies, or leave the file corrupted. Opening a file via with invokes the context manager protocol, which releases the resource when execution leaves the with block.
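A minimal before-and-after sketch (the filename is a placeholder):

Python
# Easy to get wrong: if read() raises, close() is never reached.
f = open("data.txt")
contents = f.read()
f.close()

# Preferred: the context manager closes the file even on an exception.
with open("data.txt") as f:
    contents = f.read()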
Using list/dict/set Comprehension Unnecessarily
Using get() to Return Default Values From a Dictionary
…
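To illustrate that last tip with a small, self-contained sketch (the dictionary and keys here are made up):

Python
config = {"timeout": 30}

# Verbose: explicit membership test before the lookup.
if "retries" in config:
    retries = config["retries"]
else:
    retries = 3

# Idiomatic: get() returns the default when the key is missing.
retries = config.get("retries", 3)
print(retries)  # 3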
#code reviews #python programming #debugger #code review tips #python coding #python code #code debugging