Why is an Apostille required and how do you apply?

Introduction

Under the Hague Convention of 1961, member countries agreed that documents do not need to be certified again when people travel between member countries. So, if a document is apostilled in one country, it is considered legal and genuine in all other member countries as well. This eliminates the time, effort and cost of repeatedly verifying documents, whether in the source or the destination country. India has been a member of the Hague Convention since 2005, so an Apostille issued in India is deemed legal in all countries that are members of the Convention.

Why is an Apostille required?

A country puts in a lot of effort, and has a number of measures in place, to provide proper security for the nation, its citizens and its businesses. Document verification is one such step that can prevent many future problems: if a visitor's documents are in order, the authorities can have confidence in the visitor's intentions. Apostilled documents serve as proof that the applicant is genuine and that there is no forgery. This is why an Apostille is required when people travel abroad, whether for study, work, business or tourism.

How to apply for an Apostille?

There are various ways to apply for an Apostille in India. One is e-Sanad, a portal through which applicants can apply for certificate attestation or an Apostille. Since it is a new project, only a limited number of documents can be apostilled through e-Sanad: documents from a DIA (Document Issuing Authority) that maintains an online document repository, such as CBSE documents, can be attested through it.

As per the new rules, individuals cannot apply for an Apostille directly from the MEA. Instead, authorized outsourced agencies apply for the Apostille in India, so you can submit your documents through such an authorized Apostille service provider. Since these agencies are authorized by the government, your documents will be safe and secure, and you will receive your apostilled documents on time and without any hassle.

Important points to remember for Apostille in India

1. Apostille documents are issued by the Ministry of External Affairs (MEA). But before documents can be sent to the MEA for Apostille, they must be attested by the issuing authority and the local state departments.

2. An Apostille sticker is attached to the document, which makes it legal in the member countries of the Hague Convention.

3. Cost and duration can vary depending upon the type of document.

4. The MEA attests the document after verifying the signature and seal of the issuing authority. It does not take responsibility for the content of the document.

5. Individuals must check that their document bears the seal and signature of the issuing authority before applying for an Apostille. For example, check whether your transcript has the seal and signature of the university's issuing authority; if not, it cannot be attested.

6. Individuals must check that the Apostille service they are contacting is authorized. They must check the agency's authorization number and certificate before handing over their confidential documents.

Conclusion

By now you should understand why an Apostille is required. A country needs to maintain its safety and security; similarly, it is very important for individuals to keep their personal and confidential documents safe. Therefore, it is recommended that you properly check the authenticity of an agency when you are looking for Apostille services. Worldwide Transcripts is an authorized agency for processing Apostilles in India. Contact us using the details below to get your Apostille now.

Edward Jackson

PySpark Cheat Sheet: Spark in Python

This PySpark cheat sheet with code samples covers the basics like initializing Spark in Python, loading data, sorting, and repartitioning.

Apache Spark is generally known as a fast, general-purpose, open-source engine for big data processing, with built-in modules for streaming, SQL, machine learning and graph processing. It can speed up analytic applications by up to 100 times compared to other technologies on the market today. You can interface with Spark from Python through PySpark, the Spark Python API that exposes the Spark programming model to Python.

Even though working with Spark will remind you in many ways of working with Pandas DataFrames, you'll also see that it can be tough to get familiar with all the functions you can use to query, transform and inspect your data. What's more, if you've never worked with any other programming language, or if you're new to the field, it might be hard to distinguish between the various RDD operations.

Let's face it: map() and flatMap() are different enough, but it might still be a challenge to decide which one you really need when you meet them in your analysis. And what about other functions, like reduce() and reduceByKey()?
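
A quick illustrative sketch of the difference (the sample RDDs below are not from the original cheat sheet, and assume the local SparkContext sc created in the next section):

>>> nums = sc.parallelize([1, 2, 3])
>>> nums.map(lambda x: [x, x*10]).collect() #One output list per element
[[1, 10], [2, 20], [3, 30]]
>>> nums.flatMap(lambda x: [x, x*10]).collect() #Same results, flattened
[1, 10, 2, 20, 3, 30]
>>> nums.reduce(lambda x,y: x+y) #Merge ALL elements into a single value
6
>>> pairs = sc.parallelize([('a',1),('b',2),('a',3)])
>>> pairs.reduceByKey(lambda x,y: x+y).collect() #Merge the values per key instead
[('a', 4), ('b', 2)]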

PySpark cheat sheet

Even though the documentation is very elaborate, it never hurts to have a cheat sheet by your side, especially when you're just getting into it.

This PySpark cheat sheet covers the basics, from initializing Spark and loading your data, to retrieving RDD information, sorting, filtering and sampling your data. But that's not all. You'll also see that topics such as repartitioning, iterating, merging, saving your data and stopping the SparkContext are included in the cheat sheet. 

Note that the examples in the document take small data sets to illustrate the effect of specific functions on your data. In real life data analysis, you'll be using Spark to analyze big data.

PySpark is the Spark Python API that exposes the Spark programming model to Python.

Initializing Spark 

SparkContext 

>>> from pyspark import SparkContext
>>> sc = SparkContext(master = 'local[2]')

Inspect SparkContext 

>>> sc.version #Retrieve SparkContext version
>>> sc.pythonVer #Retrieve Python version
>>> sc.master #Master URL to connect to
>>> str(sc.sparkHome) #Path where Spark is installed on worker nodes
>>> str(sc.sparkUser()) #Retrieve name of the Spark User running SparkContext
>>> sc.appName #Return application name
>>> sc.applicationId #Retrieve application ID
>>> sc.defaultParallelism #Return default level of parallelism
>>> sc.defaultMinPartitions #Default minimum number of partitions for RDDs

Configuration 

>>> from pyspark import SparkConf, SparkContext
>>> conf = (SparkConf()
     .setMaster("local")
     .setAppName("My app")
     .set("spark.executor.memory", "1g"))
>>> sc = SparkContext(conf = conf)

Using the Shell 

In the PySpark shell, a special interpreter-aware SparkContext is already created in the variable called sc.

$ ./bin/spark-shell --master local[2]
$ ./bin/pyspark --master local[4] --py-files code.py

Set which master the context connects to with the --master argument, and add Python .zip, .egg or .py files to the runtime path by passing a comma-separated list to --py-files.

Loading Data 

Parallelized Collections 

>>> rdd = sc.parallelize([('a',7),('a',2),('b',2)])
>>> rdd2 = sc.parallelize([('a',2),('d',1),('b',1)])
>>> rdd3 = sc.parallelize(range(100))
>>> rdd = sc.parallelize([("a",["x","y","z"]),
               ("b" ["p","r,"])])

External Data 

Read either one text file from HDFS, a local file system or any Hadoop-supported file system URI with textFile(), or read in a directory of text files with wholeTextFiles(). 

>>> textFile = sc.textFile("/my/directory/*.txt")
>>> textFile2 = sc.wholeTextFiles("/my/directory/")

Retrieving RDD Information 

Basic Information 

>>> rdd.getNumPartitions() #List the number of partitions
>>> rdd.count() #Count RDD instances
3
>>> rdd.countByKey() #Count RDD instances by key
defaultdict(<type 'int'>,{'a':2,'b':1})
>>> rdd.countByValue() #Count RDD instances by value
defaultdict(<type 'int'>,{('b',2):1,('a',2):1,('a',7):1})
>>> rdd.collectAsMap() #Return (key,value) pairs as a dictionary
{'a': 2, 'b': 2}
>>> rdd3.sum() #Sum of RDD elements
4950
>>> sc.parallelize([]).isEmpty() #Check whether RDD is empty
True

Summary 

>>> rdd3.max() #Maximum value of RDD elements 
99
>>> rdd3.min() #Minimum value of RDD elements
0
>>> rdd3.mean() #Mean value of RDD elements 
49.5
>>> rdd3.stdev() #Standard deviation of RDD elements 
28.866070047722118
>>> rdd3.variance() #Compute variance of RDD elements 
833.25
>>> rdd3.histogram(3) #Compute histogram by bins
([0,33,66,99],[33,33,34])
>>> rdd3.stats() #Summary statistics (count, mean, stdev, max & min)

Applying Functions 

#Apply a function to each RDD element
>>> rdd.map(lambda x: x+(x[1],x[0])).collect()
[('a',7,7,'a'),('a',2,2,'a'),('b',2,2,'b')]
#Apply a function to each RDD element and flatten the result
>>> rdd5 = rdd.flatMap(lambda x: x+(x[1],x[0]))
>>> rdd5.collect()
['a',7,7,'a','a',2,2,'a','b',2,2,'b']
#Apply a flatMap function to each (key,value) pair of rdd4 without changing the keys
>>> rdd4.flatMapValues(lambda x: x).collect()
[('a','x'),('a','y'),('a','z'),('b','p'),('b','r')]

Selecting Data

Getting

>>> rdd.collect() #Return a list with all RDD elements 
[('a', 7), ('a', 2), ('b', 2)]
>>> rdd.take(2) #Take first 2 RDD elements 
[('a', 7),  ('a', 2)]
>>> rdd.first() #Take first RDD element
('a', 7)
>>> rdd.top(2) #Take top 2 RDD elements 
[('b', 2), ('a', 7)]

Sampling

>>> rdd3.sample(False, 0.15, 81).collect() #Return sampled subset of rdd3
[3,4,27,31,40,41,42,43,60,76,79,80,86,97]

Filtering

>>> rdd.filter(lambda x: "a" in x).collect() #Filter the RDD
[('a',7),('a',2)]
>>> rdd5.distinct().collect() #Return distinct RDD values
['a',2,'b',7]
>>> rdd.keys().collect() #Return (key,value) RDD's keys
['a','a','b']

Iterating 

>>> def g(x): print(x)
>>> rdd.foreach(g) #Apply a function to all RDD elements
('a', 7)
('b', 2)
('a', 2)

Reshaping Data 

Reducing

>>> rdd.reduceByKey(lambda x,y: x+y).collect() #Merge the rdd values for each key
[('a',9),('b',2)]
>>> rdd.reduce(lambda a,b: a+b) #Merge the rdd values
('a',7,'a',2,'b',2)

Grouping by

#Return RDD of grouped values
>>> rdd3.groupBy(lambda x: x % 2) \
...          .mapValues(list) \
...          .collect()
#Group rdd by key
>>> rdd.groupByKey() \
...          .mapValues(list) \
...          .collect()
[('a',[7,2]),('b',[2])]

Aggregating

>>> seqOp = (lambda x,y: (x[0]+y,x[1]+1))
>>> combOp = (lambda x,y: (x[0]+y[0],x[1]+y[1]))
#Aggregate RDD elements of each partition and then the results
>>> rdd3.aggregate((0,0),seqOp,combOp)
(4950,100)
#Aggregate values of each RDD key
>>> rdd.aggregateByKey((0,0),seqOp,combOp).collect()
[('a',(9,2)), ('b',(2,1))]
>>> from operator import add
#Aggregate the elements of each partition, and then the results
>>> rdd3.fold(0,add)
4950
#Merge the values for each key
>>> rdd.foldByKey(0, add).collect()
[('a',9), ('b',2)]
#Create tuples of RDD elements by applying a function
>>> rdd3.keyBy(lambda x: x+x).collect()

Mathematical Operations 

>>> rdd.subtract(rdd2).collect() #Return each rdd value not contained in rdd2
[('b',2), ('a',7)]
#Return each (key,value) pair of rdd2 with no matching key in rdd
>>> rdd2.subtractByKey(rdd).collect()
[('d', 1)]
>>> rdd.cartesian(rdd2).collect() #Return the Cartesian product of rdd and rdd2

Sort 

>>> rdd2.sortBy(lambda x: x[1]).collect() #Sort RDD by given function
[('d',1),('b',1),('a',2)]
>>> rdd2.sortByKey().collect() #Sort (key,value) RDD by key
[('a',2), ('b',1), ('d',1)]

Repartitioning 

>>> rdd.repartition(4) #New RDD with 4 partitions
>>> rdd.coalesce(1) #Decrease the number of partitions in the RDD to 1

Saving 

>>> rdd.saveAsTextFile("rdd.txt")
>>> rdd.saveAsHadoopFile("hdfs://namenodehost/parent/child",
               'org.apache.hadoop.mapred.TextOutputFormat')

Stopping SparkContext 

>>> sc.stop()

Execution 

$ ./bin/spark-submit examples/src/main/python/pi.py

Have this Cheat Sheet at your fingertips

Original article source at https://www.datacamp.com

#pyspark #cheatsheet #spark #python

Monty Boehm

Extension Functionality Which Uses Stan.jl, DynamicHMC.jl, Turing.jl

DiffEqBayes.jl

This repository provides extension functionality for estimating the parameters of differential equations using Bayesian methods. It allows the choice of CmdStan.jl, Turing.jl, DynamicHMC.jl or ApproxBayes.jl to perform Bayesian estimation of a differential equation problem specified via the DifferentialEquations.jl interface.

To begin, you first need to add the package using the following commands.

Pkg.add("DiffEqBayes")
using DiffEqBayes

Tutorials and Documentation

For information on using the package, see the stable documentation. Use the in-development documentation for the version that contains the unreleased features.

Example

using ParameterizedFunctions, OrdinaryDiffEq, RecursiveArrayTools, Distributions
f1 = @ode_def LotkaVolterra begin
 dx = a*x - x*y
 dy = -3*y + x*y
end a

p = [1.5]
u0 = [1.0,1.0]
tspan = (0.0,10.0)
prob1 = ODEProblem(f1,u0,tspan,p)

σ = 0.01                         # noise, fixed for now
t = collect(1.:10.)   # observation times
sol = solve(prob1,Tsit5())
priors = [Normal(1.5, 1)]
randomized = VectorOfArray([(sol(t[i]) + σ * randn(2)) for i in 1:length(t)])
data = convert(Array,randomized)

using CmdStan #required for using the Stan backend
bayesian_result_stan = stan_inference(prob1,t,data,priors)

bayesian_result_turing = turing_inference(prob1,Tsit5(),t,data,priors)

using DynamicHMC #required for DynamicHMC backend
bayesian_result_hmc = dynamichmc_inference(prob1, Tsit5(), t, data, priors)

bayesian_result_abc = abc_inference(prob1, Tsit5(), t, data, priors)

Using save_idxs to declare observables

You don't always have data for all of the variables of the model. In the case of certain latent variables, you can utilise the save_idxs kwarg to declare the observed variables and run the inference using any of the backends, as shown below.

 sol = solve(prob1,Tsit5(),save_idxs=[1])
 randomized = VectorOfArray([(sol(t[i]) + σ * randn(1)) for i in 1:length(t)])
 data = convert(Array,randomized)

 using CmdStan #required for using the Stan backend
 bayesian_result_stan = stan_inference(prob1,t,data,priors,save_idxs=[1])

 bayesian_result_turing = turing_inference(prob1,Tsit5(),t,data,priors,save_idxs=[1])
 
 using DynamicHMC #required for DynamicHMC backend
 bayesian_result_hmc = dynamichmc_inference(prob1,Tsit5(),t,data,priors,save_idxs = [1])

 bayesian_result_abc = abc_inference(prob1,Tsit5(),t,data,priors,save_idxs=[1])

Author: SciML
Source Code: https://github.com/SciML/DiffEqBayes.jl 
License: View license

#julia #machinelearning 

Hermann Frami

Serverless Plugin AWS Contributor insights

serverless-plugin-aws-contributor-insights

This plugin allows you to use the Serverless Framework to deploy Contributor Insights rules for AWS CloudWatch.

To use this plugin, specify the following in your serverless.yml:

plugins:
  - serverless-plugin-aws-contributor-insights

custom:
  contributor-insights:
    - ruleBody: "{\"Schema\":{\"Name\":\"CloudWatchLogRule\",\"Version\":1},\"AggregateOn\":\"Count\",\"Contribution\":{\"Filters\":[{\"Match\":\"$.status\",\"GreaterThan\":500}],\"Keys\":[\"$.path\",\"$.status\"]},\"LogFormat\":\"JSON\",\"LogGroupNames\":[\"\/aws\/apigateway\/*\"]}" #REQUIRED
      ruleName: rule-1 #REQUIRED
      ruleId: ruleid1 #OPTIONAL
      ruleState: ENABLED #REQUIRED
      tags: #OPTIONAL
        - Key: key1
          Value: value1
        - Key: Key2
          Value: value2
    - ruleBody: #Supports yaml notation for ruleBody
        Schema:
          Name: CloudWatchLogRule
          Version: 1
        LogGroupNames:
        - API-Gateway-Access-Logs*
        - Log-group-name2
        LogFormat: JSON
        Contribution:
          Keys:
          - "$.ip"
          ValueOf: "$.requestBytes"
          Filters:
          - Match: "$.httpMethod"
            In:
            - PUT
        AggregateOn: Sum
      ruleName: rule-2
      ruleId: ruleid2
      ruleState: ENABLED
      tags:
        - Key: key3
          Value: value3
        - Key: key4
          Value: value4

Author: Kangcifong
Source Code: https://github.com/kangcifong/serverless-plugin-aws-contributor-insights 
License: MIT license

#aws #serverless #plugin 

笹田 洋介

Understanding Serial Read in Python - PySerial

A serial port is a serial communication interface through which information is transferred sequentially, one bit at a time. Parallel ports, by contrast, transmit multiple bits simultaneously. PySerial, and functions such as Python serial read, make communicating with serial ports easier.

About the PySerial Package

Python on a computer with the PySerial package installed can communicate with external hardware. It is a useful package for problem solvers because it facilitates data exchange between computers and external hardware such as voltmeters, flow meters, lights and other devices that send information through a port.

Installing the Module

The PySerial package is not part of the Python standard library, so it must be installed manually. The Anaconda distribution of Python comes with the package preinstalled.

PIP install command

$ pip install pyserial

Anaconda prompt command

> conda install pyserial

Importing the Module + Verifying the Installation

After installation, the version can be verified with the following command.

import serial
print(serial.__version__)

About the Function

serial.read()

Arguments – pass an integer value to specify the number of bytes to return.

Returns – the specified number of bytes

Using the Python Serial Read Function to Fetch Information from Serial Ports

Python serial read is an important function of the module. It lets us gather the information that arrives through the port. Here is a Python implementation that helps us do so.

with serial.Serial('/my/sample1', 3443, timeout=1) as ser:
     readOneByte = ser.read()      # read a single byte
     readTenByte = ser.read(10)    # read ten bytes

Explanation

By default, .read() reads one byte at a time. By providing an integer value, you can set how many bytes of information are to be read by the function.

Python Serial Read vs. Readline

  • serial.read() returns one byte at a time; serial.readline() returns all the bytes until it reaches EOL.
  • If an integer is specified within the function, serial.read() returns that many bytes. For example, serial.read(20) returns 20 bytes.
  • Instead of iterating serial.read(), serial.readline() can be used.
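
A minimal sketch of this difference in practice (the port name and baud rate below are placeholders, not values from this article; adjust them for your device):

import serial

# Placeholder port and baud rate; adjust for your device
with serial.Serial('/dev/ttyUSB0', 9600, timeout=1) as ser:
    one_byte = ser.read()       # exactly one byte (or b'' on timeout)
    ten_bytes = ser.read(10)    # up to ten bytes
    line = ser.readline()       # all bytes up to and including b'\n'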

Using Serial Read + Write on a Raspberry Pi

Make sure your Raspberry Pi is up to date by running the following commands

sudo apt update
sudo apt upgrade

Reading data

import serial

ser = serial.Serial(
        # Serial port to read the data from
        port='/dev/ttyUSB0',

        # Rate at which the information is shared over the communication channel
        baudrate = 9600,

        # Parity checking (none in this case)
        parity=serial.PARITY_NONE,

        # Number of stop bits
        stopbits=serial.STOPBITS_ONE,

        # Number of data bits per byte
        bytesize=serial.EIGHTBITS,

        # Read timeout in seconds
        timeout=1
)

# readline() blocks until a full line arrives or the timeout expires
while 1:
        x = ser.readline()
        print(x)

Writing data

import time
import serial

ser = serial.Serial(
        # Serial port to write the data to
        port='/dev/ttyUSB0',

        # Rate at which the information is shared over the communication channel
        baudrate = 9600,

        # Parity checking (none in this case)
        parity=serial.PARITY_NONE,

        # Number of stop bits
        stopbits=serial.STOPBITS_ONE,

        # Number of data bits per byte
        bytesize=serial.EIGHTBITS,

        # Read timeout in seconds
        timeout=1
)
counter = 0

# Writes the current counter number on each line,
# pausing for one second each iteration to avoid overworking the port
while 1:
        ser.write(b"Write counter: %d \n" % counter)
        time.sleep(1)
        counter += 1

Python Serial Read in Hexadecimal Format

Using the .hex() function, we store the byte data in hexadecimal format in the variable hexData.

import serial

ser = serial.Serial(
    port='/samplePort/ttyUSB1',
    baudrate=115200,
    parity=serial.PARITY_NONE,
    stopbits=serial.STOPBITS_ONE,
    bytesize=serial.EIGHTBITS,
    timeout=None
)

# Read one byte at a time and print it as a hex string
while 1:
    hexData = ser.read().hex()
    print(hexData)

Python pySerial in_waiting Function

This function can be used to retrieve the number of bytes in the input buffer.

Return(s) – Integer

Arguments – None

The function out_waiting() performs similarly: it provides the number of bytes in the output buffer.

Here is an example program implementing this function.

import serial
ser = serial.Serial(
    port = '/samplePort/myUSB',
    baudrate = 5000,
    parity = serial.PARITY_NONE,
    stopbits = serial.STOPBITS_ONE,
    bytesize = serial.EIGHTBITS,
    timeout=0.5,
    inter_byte_timeout=0.1
    )

# Read one byte of information
myBytes = ser.read(1)

# Check for more bytes in the input buffer
bufferBytes = ser.in_waiting

# If any exist, append them to the previously read information
if bufferBytes:
    myBytes = myBytes + ser.read(bufferBytes)
    print(myBytes)

Python pySerial flush() Function

flush() eliminates the contents of a file object's internal buffer. It takes no arguments and returns nothing. There are 2 types of flush functions:

  • flushInput() – clears the input buffer
  • flushOutput() – clears the output buffer

from time import sleep
import serial

ser = serial.Serial('/samplePort/myUSB10')
# Clearing the input buffer
ser.flushInput()

# Clearing the output buffer
ser.flushOutput()
ser.write(b"get")

# Pause for 100 milliseconds
sleep(.1)
print(ser.read())

Reading Data from an Arduino Board Using Python Serial Read

Arduino is an open-source electronics platform offering easy-to-use hardware and software. Arduino boards can read inputs from sensors, a finger on a button, or a Twitter message, and turn them into outputs in the form of motors, LEDs or even text.

import serial
import time

ser = serial.Serial('COM4', 9800, timeout=1)
time.sleep(2)

for i in range(50):
    # Read all available bytes until EOL
    line = ser.readline()
    if line:
        # Convert the byte string into a unicode string
        string = line.decode()
        # Convert the unicode string into an integer
        num = int(string)
        print(num)

ser.close()

Common Errors

Python throws an AttributeError: 'module' object has no attribute 'Serial'

For this problem, try renaming your project file to something other than 'serial.py', and delete serial.pyc if it exists. After that, run import serial again.

This problem occurs because the package you are importing has the same name as your project file.

Python Serial Read FAQs

Can we read more than one byte at a time with serial read?

The .read() function receives only one byte at a time. We can iterate the function to receive one byte at a time over multiple loops, but this is quite redundant. .readline() will read a complete set of bytes until EOL is reached.

Conclusion

We learned that, with the help of the PySerial module and Python serial read, we can handle information from devices through serial ports.

Source: https://www.pythonpool.com

#python 
