1598137560
Two months ago, we launched the analyzer for the Go programming language in a public beta. Since then, prominent open-source projects like Dgraph, Gauge, and many others have adopted DeepSource to enable continuous quality and security checks on their code. With features and fixes based on the feedback we received from early adopters, we are pleased to announce that DeepSource's Go analyzer is now generally available. Read on for all the details.
DeepSource installs the third-party dependencies listed in your project during analysis to get a full view of your application. Up until this release, we supported Go modules as the only way to install them, and if we detected some other package manager, we'd try to migrate the project to Go modules (using the go mod init <import path> command). While this approach worked for many repositories, it did not work for others. One such case was when a dependency of the analyzed repository had versioned packages (package.domain.tld/v1), but the repository itself was using the unversioned package (the "zeroth" version).
To analyze such packages better, we now detect the package manager used by the repository and use it to install the dependencies. We now support 10 package managers; you can find the details here.
We now detect 21 new issues in your Go code, bringing the total number of issues raised to 154: 72 bug risks, 41 anti-patterns, 19 security issues, 11 style issues, and 11 performance issues. Of the 21 new issues, 12 prevent bug risks, 4 detect anti-patterns, and 5 help improve performance. Let's look at some of the new issues:
#go
1599854400
The Go team announced Go 1.15 on 11 August 2020. Highlighted updates and features include substantial improvements to the Go linker, improved allocation for small objects at high core counts, X.509 CommonName deprecation, GOPROXY support for skipping proxies that return errors, a new embedded tzdata package, several core library improvements, and more.
In keeping with Go's promise of backward compatibility, almost all existing Go applications and programs continue to compile and run unchanged after upgrading to Go 1.15.
#go #golang #go 1.15 #go features #go improvement #go package #go new features
1629730088
api_manager: a simple Flutter package to manage REST API requests easily, built on top of dio.
dependencies:
api_manager: $latest_version
import 'package:api_manager/api_manager.dart';
void main() async {
ApiResponse response = await ApiManager().request(
requestType: RequestType.GET,
route: "your route",
);
print(response);
}
class ApiRepository {
static final ApiRepository _instance = ApiRepository._internal(); /// singleton api repository
ApiManager _apiManager;
factory ApiRepository() {
return _instance;
}
/// base configuration for api manager
ApiRepository._internal() {
_apiManager = ApiManager();
_apiManager.options.baseUrl = BASE_URL; /// EX: BASE_URL = https://google.com/api/v1
_apiManager.options.connectTimeout = 100000;
_apiManager.options.receiveTimeout = 100000;
_apiManager.enableLogging(responseBody: true, requestBody: false); /// enable API logging, EX: response, request, headers, etc.
_apiManager.enableAuthTokenCheck(() => "access_token"); /// EX: JWT/Passport auth token stored in cache
}
}
Suppose we have a response model like this:
class SampleResponse{
String name;
int id;
SampleResponse.fromJson(jsonMap):
this.name = jsonMap['name'],
this.id = jsonMap['id'];
}
and the actual API response JSON structure is:
{
"data": {
"name": "md afratul kaoser taohid",
"id": "id"
}
}
# Performing a GET request:
Future<ApiResponse<SampleResponse>> getRequestSample() async =>
await _apiManager.request<SampleResponse>(
requestType: RequestType.GET,
route: 'api_route',
requestParams: {"userId": 12}, /// add params if required
isAuthRequired: true, /// when true, this request adds an Authorization header using the token from enableAuthTokenCheck()
responseBodySerializer: (jsonMap) {
return SampleResponse.fromJson(jsonMap); /// parse the json response into dart model class
},
);
# Performing a POST request:
Future<ApiResponse<SampleResponse>> postRequestSample() async =>
await _apiManager.request<SampleResponse>(
requestType: RequestType.POST,
route: 'api_route',
requestBody: {"userId": 12}, /// add POST request body
isAuthRequired: true, /// when true, this request adds an Authorization header using the token from enableAuthTokenCheck()
responseBodySerializer: (jsonMap) {
return SampleResponse.fromJson(jsonMap); /// parse the json response into dart model class
},
);
# Performing a multipart file upload request:
Future<ApiResponse<void>> updateProfilePicture(
String filePath,
) async {
MultipartFile multipartFile =
await _apiManager.getMultipartFileData(filePath);
FormData formData = FormData.fromMap({'picture': multipartFile});
return await _apiManager.request(
requestType: RequestType.POST,
isAuthRequired: true,
requestBody: formData,
route: 'api_route',
);
}
Run this command:
With Flutter:
$ flutter pub add api_manager
This will add a line like this to your package's pubspec.yaml (and run an implicit dart pub get):
dependencies:
api_manager: ^0.1.29
Alternatively, your editor might support flutter pub get. Check the docs for your editor to learn more.
Now in your Dart code, you can use:
import 'package:api_manager/api_manager.dart';
void main() async {
  ApiManager _apiManager = ApiManager();
  _apiManager.options.baseUrl = $base_url;
  _apiManager.responseBodyWrapper("data");

  ApiResponse<List<dynamic>> response = await _apiManager.request(
    requestType: RequestType.GET,
    route: $route,
    responseBodySerializer: (jsonMap) {
      return jsonMap as List;
    },
  );
  print(response);
}
Download Details:
Author: afratul-taohid
Source Code: https://github.com/afratul-taohid/api_manager
1650856507
SQL is one of the most important concepts to learn in the field of Data Science. We'll discuss why SQL is essential for Data Science and how we can work with SQL using Python.
Data Science is a fast-emerging field with numerous job opportunities. We have all heard about the top Data Science skills. To start with, the easiest as well as most essential skill that every Data Science aspirant should acquire is SQL.
Nowadays, most companies are becoming data-driven. Their data is stored in databases and is managed and processed through a Database Management System (DBMS). A DBMS makes our work easy and organized. Hence, it is essential to integrate a popular programming language with a capable DBMS tool.
SQL is the most widely used programming language while working with databases and supported by various relational database systems, like MySQL, SQL Server, and Oracle. However, the SQL standard has some features that are implemented differently in different database systems. Thus, SQL becomes one of the most important concepts to be learned in this field of Data Science.
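To illustrate the point about dialect differences, here is a small sketch using Python's built-in sqlite3 module: SQLite (like standard SQL and Oracle) concatenates strings with ||, whereas MySQL uses CONCAT() by default and SQL Server uses +.

```python
import sqlite3

# String concatenation with || — valid in SQLite and standard SQL,
# but MySQL would treat || as logical OR by default.
conn = sqlite3.connect(":memory:")
result = conn.execute("SELECT 'data' || ' ' || 'science'").fetchone()[0]
print(result)  # data science
conn.close()
```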
Image source: KDnuggets
SQL (Structured Query Language) is used for performing various operations on the data stored in databases, like updating records, deleting records, and creating and modifying tables and views. SQL is also the standard on current big data platforms, which use it as the key API to their relational databases.
Data Science is the all-around study of data. To work with data, we need to extract it from the database. This is where SQL comes into the picture. Relational Database Management is a crucial part of Data Science. A Data Scientist can control, define, manipulate, create, and query the database using SQL commands.
Many modern companies have equipped their products' data management with NoSQL technology, but SQL remains the ideal choice for many business intelligence tools and in-office operations.
Many database platforms are modeled after SQL, which is why it has become a standard for many database systems. Modern big data systems like Hadoop and Spark also provide SQL interfaces for maintaining relational data and processing structured data.
We can say that:
1. A Data Scientist needs SQL to handle structured data, which is stored in relational databases. Therefore, to query these databases, a Data Scientist must have a good knowledge of SQL commands.
2. Big data platforms like Hadoop and Spark provide extensions for querying and manipulating data using SQL commands.
3. SQL is the standard tool for experimenting with data through the creation of test environments.
4. To perform analytics operations on data stored in relational databases like Oracle, Microsoft SQL Server, and MySQL, we need SQL.
5. SQL is also an essential tool for data wrangling and preparation, so we make use of SQL when dealing with various big data tools.
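As a minimal sketch of point 4, the following runs an analytics query (a GROUP BY aggregation) over structured data directly from Python. The table name and sample rows are hypothetical, and the built-in sqlite3 module stands in for a production database:

```python
import sqlite3

# Hypothetical sales data in an in-memory SQLite database.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (region TEXT, amount REAL)")
conn.executemany(
    "INSERT INTO sales VALUES (?, ?)",
    [("north", 100.0), ("north", 50.0), ("south", 75.0)],
)

# An analytics operation expressed in SQL: total sales per region.
totals = list(conn.execute(
    "SELECT region, SUM(amount) FROM sales GROUP BY region ORDER BY region"
))
print(totals)  # [('north', 150.0), ('south', 75.0)]
conn.close()
```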
These are the key aspects of SQL that are most useful for Data Science. Every aspiring Data Scientist should know these necessary SQL skills and features.
As we all know, SQL is the most used database management tool, and Python is the most popular Data Science language thanks to its flexibility and wide range of libraries. There are various ways to use SQL with Python: Python provides multiple libraries developed for this purpose, such as sqlite3 (for SQLite), mysql-connector-python (for MySQL), and psycopg2 (for PostgreSQL).
There are many use cases in which Data Scientists want to connect Python to SQL. They need to connect to a SQL database so that data coming from a web application can be stored, and the connection also helps them communicate between different data sources.
There is no need to switch between different programming languages for data management, which makes Data Scientists' work more convenient: they can use their Python skills to manipulate data stored in a SQL database directly, without exporting it to a CSV file first.
MySQL is a server-based database management system, and one MySQL server can host multiple databases. Creating a MySQL database is a two-step process:
1. Make a connection to the MySQL server.
2. Execute separate queries to create the database and process data.
Let's get started with MySQL and Python.
First, we will create a connection between the MySQL server and MySQL DB. For this, we will define a function that will establish a connection to the MySQL database server and will return the connection object:
!pip install mysql-connector-python
import mysql.connector
from mysql.connector import Error
def create_connection(host_name, user_name, user_password):
    connection = None
    try:
        connection = mysql.connector.connect(
            host=host_name,
            user=user_name,
            passwd=user_password
        )
        print("Connection to MySQL DB successful")
    except Error as e:
        print(f"The error '{e}' occurred")
    return connection
connection = create_connection("localhost", "root", "")
In the above code, we have defined a function create_connection() that accepts the following three parameters:
1. host_name
2. user_name
3. user_password
mysql.connector is a Python SQL module containing a .connect() method that is used to connect to a MySQL database server. Once the connection is established, the created connection object is returned to the calling function.
Now that the connection is established successfully, let's create a database.
# we have created a function to create a database that takes two parameters:
# connection and query
def create_database(connection, query):
    # create a cursor object to execute SQL queries
    cursor = connection.cursor()
    try:
        # the query to be executed is passed to cursor.execute() as a string
        cursor.execute(query)
        print("Database created successfully")
    except Error as e:
        print(f"The error '{e}' occurred")

# now we create a database named example_app
create_database_query = "CREATE DATABASE example_app"
create_database(connection, create_database_query)

# redefine create_connection() so it also connects to a specific database
# on the server
def create_connection(host_name, user_name, user_password, db_name):
    connection = None
    try:
        connection = mysql.connector.connect(
            host=host_name,
            user=user_name,
            passwd=user_password,
            database=db_name
        )
        print("Connection to MySQL DB successful")
    except Error as e:
        print(f"The error '{e}' occurred")
    return connection

# calling create_connection() to connect to the example_app database
connection = create_connection("localhost", "root", "", "example_app")
SQLite is probably the most uncomplicated database to connect to from a Python application, since it is supported by a built-in module and we don't need to install any external Python SQL packages. By default, a Python installation contains a SQL library named sqlite3 that can be used to interact with an SQLite database.
SQLite is a serverless database: it reads and writes data to a file. That means we don't even need to install and run an SQLite server to perform database operations, as we do with MySQL and PostgreSQL!
Let’s use sqlite3 to connect to an SQLite database in Python:
import sqlite3
from sqlite3 import Error

def create_connection(path):
    connection = None
    try:
        connection = sqlite3.connect(path)
        print("Connection to SQLite DB successful")
    except Error as e:
        print(f"The error '{e}' occurred")
    return connection
In the above code, we have imported sqlite3 and the module's Error class, then defined a function create_connection() that accepts the path to the SQLite database. sqlite3.connect() takes the database path as a parameter. If a database exists at the specified path, a connection to it is established; otherwise, a new database is created at that path, and then the connection is established.
sqlite3.connect(path) returns a connection object, which is also what create_connection() returns. This connection object is used to execute SQL queries on the SQLite database. The following line of code creates a connection to the SQLite database:
connection = create_connection(r"E:\example_app.sqlite")  # raw string so the backslash isn't treated as an escape
Once the connection is established we can see the database file is created in the root directory and if we want, we can also change the location of the file.
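The create-on-connect behavior can be verified end to end. The sketch below uses a hypothetical database file in a temporary directory (rather than the E:\ path above) so it runs anywhere; the table and row are illustrative:

```python
import os
import sqlite3
import tempfile

# Hypothetical path in a temp directory; no database exists there yet.
path = os.path.join(tempfile.mkdtemp(), "example_app.sqlite")
created_before = os.path.exists(path)   # False: no file yet

connection = sqlite3.connect(path)      # creates the database file and connects
connection.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
connection.execute("INSERT INTO users (name) VALUES (?)", ("taohid",))
connection.commit()

created_after = os.path.exists(path)    # True: the file now exists on disk
name = connection.execute("SELECT name FROM users").fetchone()[0]
connection.close()
print(created_before, created_after, name)
```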
In this article, we discussed how SQL is essential for Data Science and how we can work with SQL using Python. Thanks for reading. Do let me know your thoughts and feedback in the comment section.
#sql #datascience #database #programming #developer