It’s no wonder many developers view testing as a necessary evil that saps time and energy: Testing can be tedious, unproductive, and entirely too complicated.
My first experience with testing was awful. I worked on a team that had strict code coverage requirements. The workflow was: implement a feature, debug it, and write tests to ensure full code coverage. The team didn’t have integration tests, only unit tests with tons of manually initialized mocks, and most unit tests tested trivial manual mappings while using a library to perform automatic mappings. Every test tried to assert every available property, so every change broke dozens of tests.
I disliked working with tests because they were perceived as a time-consuming burden. However, I didn’t give up. The confidence testing provides and the automation of checks after every small change piqued my interest. I started reading and practicing, and learned that tests, when done right, could be both helpful and enjoyable.
In this article, I share eight automated testing best practices I wish I had known from the beginning.
Automated testing is often focused on the future, but when you implement it correctly, you benefit immediately. Using tools that help you do your job better can save time and make your work more enjoyable.
Imagine you’re developing a system that retrieves purchase orders from the company’s ERP and places those orders with a vendor. You have the price of previously ordered items in the ERP, but the current prices may be different. You want to control whether to place an order at a lower or higher price. You have user preferences stored, and you’re writing code to handle price fluctuations.
How would you check that the code works as expected? You would probably run the application, reproduce the necessary conditions, and set a breakpoint in the new code.
Stopped at the breakpoint, you can go step by step to see what will happen for one scenario, but there are many possible scenarios:
| Allow higher price | Allow lower price | ERP price | Vendor price | Should we place the order? |
|---|---|---|---|---|
| false | false | 10 | 10 | true |
| true | false | 10 | 11 | true |
| true | false | 10 | 9 | false |
| false | true | 10 | 11 | false |
| false | true | 10 | 9 | true |
| true | true | 10 | 11 | true |
| true | true | 10 | 9 | true |

(When the prices are equal, the other three preference combinations give the same result, so only one equal-price row is shown.)
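The decision rule behind this table fits in a few lines. Here is a minimal sketch in Python; the function name and signature are illustrative, not taken from the actual system:

```python
def should_place_order(allow_higher: bool, allow_lower: bool,
                       erp_price: float, vendor_price: float) -> bool:
    """Decide whether to place an order given the user's price preferences."""
    if vendor_price == erp_price:
        return True          # same price: always safe to order
    if vendor_price > erp_price:
        return allow_higher  # price went up: only if the user allows it
    return allow_lower       # price went down: only if the user allows it

# A few table rows, checked by hand:
assert should_place_order(False, False, 10, 10) is True
assert should_place_order(True, False, 10, 11) is True
assert should_place_order(True, False, 10, 9) is False
```

Each row of the table becomes one call to this function, which is what makes the scenario set such a natural fit for automated tests.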
In case of a bug, the company may lose money, harm its reputation, or both. You need to check multiple scenarios and repeat the check loop several times. Doing so manually would be tedious. But tests are here to help!
Tests let you create any context without calls to unstable APIs. They eliminate the need for repetitive clicking through old and slow interfaces that are all too common in legacy ERP systems. All you have to do is define the context for the unit or subsystem and then any debugging, troubleshooting, or scenario exploring happens instantly—you run the test and you are back to your code. My preference is to set up a keybinding in my IDE that repeats my previous test run, giving immediate, automated feedback as I make changes.
Compared to manual debugging and self-testing, automated tests are more productive from the very beginning, even before any testing code is committed. After you check that your code behaves as expected—by manually testing or perhaps, for a more complex module, by stepping through it with a debugger during testing—you can use assertions to define what you expect for any combination of input parameters.
With tests passing, you’re almost ready to commit, but not quite. Prepare to refactor your code because the first working version usually isn’t elegant. Would you perform that refactoring without tests? That’s questionable because you’d have to complete all the manual steps again, which could diminish your enthusiasm.
What about the future? While performing any refactoring, optimization, or feature addition, tests help ensure that a module still behaves as expected after you change it, thereby instilling lasting confidence and allowing developers to feel better equipped to tackle upcoming work.
It’s counterproductive to think about tests as a burden or something that makes only code reviewers or leads happy. Tests are a tool that we as developers benefit from. We like when our code works and we don’t like to spend time on repetitive actions or on fixing code to address bugs.
Recently, I worked on refactoring in my codebase and asked my IDE to clean up unused `using` directives. To my surprise, tests showed several failures in my email reporting system. However, it was a valid failure: the cleanup had removed some `using` directives in my Razor (HTML + C#) code for an email template, and the template engine could no longer build valid HTML as a result. I didn't expect such a minor operation to break email reporting. Testing saved me from spending hours chasing bugs all over the app right before its release, when I had assumed that everything would work.
Of course, you have to know how to use your tools without cutting your proverbial fingers. It might seem that defining the context is tedious and can be harder than running the app, or that tests require too much maintenance to avoid becoming stale and useless. These are valid concerns, and we will address them.
Developers often grow to dislike automated tests when they find themselves mocking a dozen dependencies only to check whether those dependencies are called by the code. Alternatively, developers encounter a high-level test and try to reproduce every application state to check all variations in a small module. These patterns are unproductive and tedious, but we can avoid them by leveraging different test types as they were intended. (Tests should be practical and enjoyable, after all!)
Readers will need to know what unit tests are and how to write them, and be familiar with integration tests—if not, it’s worth pausing here to get up to speed.
There are dozens of testing types, but five common types make an extremely effective combination: unit, integration, functional, canary, and load tests.
It's not always necessary to work with all five testing types from the beginning. In most cases, you can go a long way with the first three.
We’ll briefly examine the use cases of each type to help you select the right ones for your needs.
Recall the example with different prices and handling preferences. It’s a good candidate for unit testing because we care only about what is happening inside the module, and the results have important business ramifications.
The module has a lot of different combinations of input parameters, and we want to get a valid return value for every combination of valid arguments. Unit tests are good at ensuring validity because they provide direct access to the input parameters of the function or method, and you don't have to write dozens of test methods to cover every combination. In many languages, you can avoid duplicating test methods by defining a single method that accepts the arguments your code needs along with the expected results. Then, you can use your test tooling to provide different sets of values and expectations for that parameterized method.
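As an illustration of such a parameterized test, here is a sketch using pytest (one popular option); the `should_place_order` function stands in for the hypothetical price-handling module discussed earlier:

```python
# A parameterized unit test: one test method, one row per scenario.
# The function under test is an illustrative assumption, not real production code.
import pytest

def should_place_order(allow_higher, allow_lower, erp_price, vendor_price):
    if vendor_price == erp_price:
        return True
    if vendor_price > erp_price:
        return allow_higher
    return allow_lower

@pytest.mark.parametrize(
    "allow_higher, allow_lower, erp_price, vendor_price, expected",
    [
        (False, False, 10, 10, True),
        (True,  False, 10, 11, True),
        (True,  False, 10, 9,  False),
        (False, True,  10, 11, False),
        (False, True,  10, 9,  True),
        (True,  True,  10, 11, True),
        (True,  True,  10, 9,  True),
    ],
)
def test_should_place_order(allow_higher, allow_lower,
                            erp_price, vendor_price, expected):
    # The test runner calls this once per tuple above.
    assert should_place_order(allow_higher, allow_lower,
                              erp_price, vendor_price) == expected
```

Adding a new scenario is now one more tuple in the list, not a whole new test method.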
Integration tests are a good fit for cases when you are interested in how a module interacts with its dependencies, other modules, or the infrastructure. You still use direct method calls, but there's no access to submodules, so trying to test all scenarios for all inputs of all submodules is impractical.
Typically, I prefer to have one success scenario and one failure scenario per module.
I like to use integration tests to check if a dependency injection container is built successfully, whether a processing or calculation pipeline returns the expected result, or whether complex data was read and converted correctly from a database or third-party API.
These tests give you the most confidence that your app works because they verify that your app can at least start without a runtime error. It’s a little more work to start testing your code without direct access to its classes, but once you understand and write the first few tests, you’ll find it’s not too difficult.
Run the application by starting a process with command-line arguments, if needed, and then use the application as your prospective customer would: by calling API endpoints or pressing buttons. This is not difficult, even in the case of UI testing: Each major platform has a tool to find a visual element in a UI.
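As a minimal sketch of this idea in Python: the test below launches a stand-in "application" as a separate process and asserts only on externally observable behavior, the way a functional test would exercise a real binary or API. The inline script is a placeholder, not a real app:

```python
# A functional-test sketch: run the app as a process, check its visible output.
import subprocess
import sys

def test_app_reports_success():
    # Start the "application" as its own process, as a user or harness would.
    # Here the app is a trivial inline script standing in for a real binary.
    result = subprocess.run(
        [sys.executable, "-c", "print('order placed')"],
        capture_output=True, text=True, timeout=30,
    )
    # Assert only on externally observable behavior: exit code and output.
    assert result.returncode == 0
    assert "order placed" in result.stdout

test_app_reports_success()
```

A real functional test would replace the inline script with your application's start command and drive it through its public interface.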
Functional tests let you know if your app works in a testing environment but what about a production environment? Suppose you’re working with several third-party APIs and you want to have a dashboard of their states or want to see how your application handles incoming requests. These are common use cases for canary tests.
They operate by briefly acting on the working system without causing side effects to third-party systems. For example, you can register a new user or check product availability without placing an order.
The purpose of canary tests is to be sure that all major components are working together in a production environment, not failing because of, for example, credential issues.
Load tests reveal whether your application will continue to work when large numbers of people start using it. They’re similar to canary and functional tests but aren’t conducted in local or production environments. Usually, a special staging environment is used, which is similar to the production environment.
It’s important to note that these tests do not use real third-party services, which might be unhappy with external load testing of their production services and may charge extra as a result.
When devising your automated test plan, each type of test should be separated so as to be able to run independently. While this requires extra organization, it is worthwhile because mixing tests can create problems.
These test types differ in their dependencies, execution times, and stability requirements.
It’s important to note that with most languages and tech stacks, you can group, for example, all unit tests together with subfolders named after functional modules. This is convenient, reduces friction when creating new functional modules, is easier for automated builds, results in less clutter, and is one more way to simplify testing.
Imagine a situation in which you’ve written some tests, but after pulling your repo a few weeks later, you notice those tests are no longer passing.
This is an unpleasant reminder that tests are code and, like any other piece of code, they need to be maintained. The best time for this is right before the moment you think you’ve finished your work and want to see if everything still operates as intended. You have all the context needed and you can fix the code or change the failing tests more easily than your colleague working on a different subsystem. But this moment only exists in your mind, so the most common way to run tests is automatically after a push to the development branch or after creating a pull request.
This way, your main branch will always be in a valid state, or you will, at least, have a clear indication of its state. An automated building and testing pipeline—or a CI pipeline—helps:
Configuring this pipeline takes time, but the pipeline can reveal a range of issues before they reach users or clients, even when you’re the sole developer.
Once running, CI also reveals new issues before they have a chance to grow in scope. As such, I prefer to set it up right after writing the first test. You can host your code in a private repository on GitHub and set up GitHub Actions. If your repo is public, you have even more options than GitHub Actions. For instance, my automated test plan runs on AppVeyor, for a project with a database and three types of tests.
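For illustration, a minimal GitHub Actions workflow along these lines might look as follows. The file path, branch name, and `make` targets are assumptions to adapt to your own stack:

```yaml
# .github/workflows/ci.yml -- illustrative names and commands
name: CI
on:
  push:
    branches: [main]
  pull_request:

jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Build
        run: make build            # placeholder build command
      - name: Run unit tests
        run: make test-unit        # placeholder test command
      - name: Run integration tests
        run: make test-integration # placeholder test command
```

Each push and pull request then gets the same build-and-test treatment, keeping the main branch's state visible at all times.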
I prefer to structure my pipeline for production projects to build the project first and then run the unit, integration, and functional tests. There are no canary tests or load tests: Because of their specifics and requirements, they should be initiated manually.
Writing unit tests for all code is a common strategy, but sometimes this wastes time and energy, and doesn’t give you any confidence. If you’re familiar with the “testing pyramid” concept, you may think that all of your code must be covered with unit tests, with only a subset covered by other, higher-level tests.
I don’t see any need to write a unit test that ensures that several mocked dependencies are called in the desired order. Doing that requires setting up several mocks and verifying all the calls, but it still would not give me the confidence that the module is working. Usually, I only write an integration test that uses real dependencies and checks only the result; that gives me some confidence that the pipeline in the tested module is working properly.
In general, I write tests that make my life easier while implementing functionality and supporting it later.
For most applications, aiming for 100% code coverage adds a great deal of tedious work and eliminates the joy from working with tests and programming in general. As Martin Fowler’s Test Coverage puts it:
Test coverage is a useful tool for finding untested parts of a codebase. Test coverage is of little use as a numeric statement of how good your tests are.
Thus, I recommend you install and run a coverage analyzer after writing some tests. The report, with its highlighted lines of code, will help you better understand your code's execution paths and find uncovered places that should be covered. Also, looking at your getters, setters, and facades, you'll see why 100% coverage is no fun.
From time to time, I see questions like, “How can I test private methods?” You don’t. If you’ve asked that question, something has already gone wrong. Usually, it means you violated the Single Responsibility Principle, and your module doesn’t do something properly.
Refactor the module and pull the logic you think is important into a separate module. There's no problem with increasing the number of files; this leads to code structured like Lego bricks: very readable, maintainable, replaceable, and testable.
Refactoring a module to resemble Lego bricks.
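As a sketch of what this looks like in practice (all names here are hypothetical): suppose a pricing rule is buried in a private method of a service class. Extracting it into its own pure function makes it directly testable, with no need to touch anything private:

```python
# Before: the pricing rule was a private method of OrderService,
# unreachable by tests. After extraction, it is a small public function.

def net_price(gross_price: float, discount_rate: float) -> float:
    """Pure pricing rule, extracted from a larger service class."""
    if not 0.0 <= discount_rate <= 1.0:
        raise ValueError("discount_rate must be between 0 and 1")
    return round(gross_price * (1.0 - discount_rate), 2)

class OrderService:
    """The service now delegates to the extracted function."""
    def total(self, gross_price: float, discount_rate: float) -> float:
        return net_price(gross_price, discount_rate)

# The rule is tested directly, with no private access needed:
assert net_price(100.0, 0.25) == 75.0
```

The service keeps its public behavior, and the interesting logic now lives in its own easily tested "brick."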
Properly structuring code is easier said than done. Here are two suggestions:
It’s worth learning about the principles and ideas of functional programming. Most mainstream languages, like C, C++, C#, Java, Assembly, JavaScript, and Python, force you to write programs for machines. Functional programming is better suited to the human brain.
This may seem counterintuitive at first, but consider this: A computer will be fine if you put all of your code in a single method, use a shared memory chunk to store temporary values, and use a fair amount of jump instructions. Moreover, compilers in the optimization stage sometimes do this. However, the human brain doesn’t easily handle this approach.
Functional programming forces you to write pure functions without side effects, with strong types, in an expressive manner. That way it’s much easier to reason about a function because the only thing it produces is its return value. The Programming Throwdown podcast episode Functional Programming With Adam Gordon Bell will help you to gain a basic understanding, and you can continue with the Corecursive episodes God’s Programming Language With Philip Wadler and Category Theory With Bartosz Milewski. The last two greatly enriched my perception of programming.
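A tiny illustration of the difference (not from any particular codebase): the impure version depends on hidden shared state, while the pure version's only output is its return value.

```python
# Impure: reads and mutates shared state, so its result varies between calls.
totals = []

def add_order_impure(amount):
    totals.append(amount)  # hidden side effect on module-level state
    return sum(totals)

# Pure: everything it needs comes in as arguments; nothing outside changes.
def add_order_pure(history, amount):
    return history + [amount]  # returns a new list, input left untouched

history = add_order_pure([], 10)
history = add_order_pure(history, 5)
assert history == [10, 5]
```

The pure version is trivial to reason about and to test: the same arguments always produce the same result.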
I recommend mastering TDD. The best way to learn is to practice, and the String Calculator Kata is a great code kata to practice with. Mastering the kata will take time but will ultimately allow you to fully absorb the idea of TDD, which will help you create well-structured, testable code that is a delight to work with.
One note of caution: Sometimes you’ll see TDD purists claiming that TDD is the only right way to program. In my opinion, it is simply another useful tool in your toolbox, nothing more.
Sometimes, you need to see how modules and processes fit together and don't yet know what data and signatures to use. In such cases, write code until it compiles, and then write tests to troubleshoot and debug the functionality.
In other cases, you know the input and the output you want, but have no idea how to write the implementation properly because of complicated logic. For those cases, it’s easier to start following the TDD procedure and build your code step by step rather than spend time thinking about the perfect implementation.
It’s a pleasure to work in a neatly organized code environment without unnecessary distractions. That’s why it’s important to apply SOLID, KISS, and DRY principles to tests—utilizing refactoring when it’s needed.
Sometimes I hear comments like, “I hate working in a heavily tested codebase because every change requires me to fix dozens of tests.” That’s a high-maintenance problem caused by tests that aren’t focused and try to test too much. The principle of “Do one thing well” applies to tests too: “Test one thing well”; each test should be relatively short and test only one concept. “Test one thing well” doesn’t mean that you should be limited to one assertion per test: You can use dozens if you’re testing non-trivial and important data mapping.
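For example, a single focused test of a data mapping can legitimately carry several assertions, because they all verify one concept: one correctly mapped record. The record shape and `map_order` function below are illustrative assumptions:

```python
# One focused test, several assertions, one concept:
# the mapping of a single ERP record into our internal structure.

def map_order(erp_record):
    """Map a raw ERP record (illustrative shape) to our internal order dict."""
    return {
        "id": int(erp_record["OrderNo"]),
        "item": erp_record["ItemCode"].strip().upper(),
        "price": float(erp_record["UnitPrice"]),
    }

def test_maps_erp_order():
    order = map_order({"OrderNo": "42", "ItemCode": " ab-1 ", "UnitPrice": "9.99"})
    # All three assertions check the same concept: one mapped record.
    assert order["id"] == 42
    assert order["item"] == "AB-1"
    assert order["price"] == 9.99

test_maps_erp_order()
```

If the mapping rules change, only this one test needs to change with them.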
This focus is not limited to one specific test or type of test. Imagine dealing with complicated logic that you tested using unit tests, such as mapping data from the ERP system to your structure, and you have an integration test that is accessing mock ERP APIs and returning the result. In that case, it’s important to remember what your unit test already covers so you don’t test the mapping again in integration tests. Usually, it’s enough to ensure the result has the correct identification field.
With code structured like Lego bricks and focused tests, changes to business logic should not be painful. If changes are radical, you simply drop the file and its related tests, and make a new implementation with new tests. In case of minor changes, you typically change one to three tests to meet the new requirements and make changes to the logic. It’s fine to change tests; you can think about this practice as double-entry bookkeeping.
Other ways to achieve simplicity include writing test infrastructure helpers, for example, a method like `TestServices.Get()` that returns a fully configured service without manually creating dependencies. That way, it will be easy to read, maintain, and write new tests because you already have useful helpers in place.
If you feel a test is becoming too complicated, simply stop and think. Either the module or your test needs to be refactored.
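A sketch of such a `TestServices`-style helper in Python; every class, name, and dependency here is hypothetical:

```python
# A hypothetical test helper that hands back a fully wired service,
# so individual tests don't repeat dependency setup.

class FakeErpClient:
    """Test double returning canned data instead of calling a real ERP."""
    def get_price(self, item):
        return 10.0

class PricingService:
    def __init__(self, erp_client):
        self.erp = erp_client
    def current_price(self, item):
        return self.erp.get_price(item)

class TestServices:
    @staticmethod
    def get_pricing_service():
        # One place to change when the service's dependencies change.
        return PricingService(erp_client=FakeErpClient())

def test_current_price():
    service = TestServices.get_pricing_service()  # no manual wiring in the test
    assert service.current_price("ab-1") == 10.0

test_current_price()
```

When a constructor gains a new dependency, only the helper changes, not every test that uses the service.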
You will face many tedious tasks while testing. For example, setting up test environments or data objects, configuring stubs and mocks for dependencies, and so on. Luckily, every mature tech stack contains several tools to make these tasks much less tedious.
I suggest you write your first hundred tests if you haven’t already, then invest some time to identify repetitive tasks and learn about testing-related tooling for your tech stack.
For inspiration, survey the testing-related tooling available for your tech stack; most mature ecosystems offer test runners, assertion libraries, test data generators, and mocking frameworks.
Happiness is tightly coupled with the so-called “flow” experience described in detail in the book Flow: The Psychology of Optimal Experience. To achieve that flow experience, you must be engaged in an activity with a clear set of goals and be able to see your progress. Tasks should result in immediate feedback, for which automated tests are ideal. You also need to strike a balance between challenges and skills, which is up to every individual. Tests, particularly when approached with TDD, can help guide you and instill confidence. They help you to set specific goals, with each passed test being an indicator of your progress.
The right approach to testing can make you happier and more productive, and tests decrease the chances of burnout. The key is to view testing as a tool (or toolset) that can help you in your daily development routine, not as a burdensome step for future-proofing your code.
Testing is a necessary part of programming that allows software engineers to improve the way they work, deliver the best results, and use their time optimally. Perhaps even more importantly, tests can help developers enjoy their work more, thus boosting their morale and motivation.
Original article source at: https://www.toptal.com/
Reverse automated differentiation from an expression or a function
This package provides a function `rdiff()` that generates valid Julia code for the calculation of derivatives up to any order for a user-supplied expression or generic function. Install with `Pkg.add("ReverseDiffSource")`. Package documentation and examples can be found here.
This version of automated differentiation operates at the source level (provided either as an expression or a generic function) to output Julia code calculating the derivatives (as an expression or a function, respectively). Compared to other automated differentiation methods, it does not rely on method overloading or new types and should, in principle, produce fast code.
Usage examples:
julia> rdiff( :(x^3) , x=Float64) # 'x=Float64' indicates the type of x to rdiff
:(begin
(x^3,3 * x^2.0) # expression calculates a tuple of (value, derivative)
end)
Derivatives of sin(x) up to order 10 (notice the simplifications):
julia> rdiff( :(sin(x)) , order=10, x=Float64) # derivatives up to order 10
:(begin
_tmp1 = sin(x)
_tmp2 = cos(x)
_tmp3 = -_tmp1
_tmp4 = -_tmp2
_tmp5 = -_tmp3
(_tmp1,_tmp2,_tmp3,_tmp4,_tmp5,_tmp2,_tmp3,_tmp4,_tmp5,_tmp2,_tmp3)
end)
julia> rosenbrock(x) = (1 - x[1])^2 + 100(x[2] - x[1]^2)^2 # function to be derived
julia> rosen2 = rdiff(rosenbrock, (Vector{Float64},), order=2) # orders up to 2
(anonymous function)
rdiff() can also differentiate a plain Julia function, such as this small neural network:
# w1-w3 are the hidden layer weight matrices, x1 the input vector
function ann(w1, w2, w3, x1)
x2 = w1 * x1
x2 = log(1. + exp(x2)) # soft RELU unit
x3 = w2 * x2
x3 = log(1. + exp(x3)) # soft RELU unit
x4 = w3 * x3
1. / (1. + exp(-x4[1])) # sigmoid output
end
w1, w2, w3 = randn(10,10), randn(10,10), randn(1,10)
x1 = randn(10)
dann = m.rdiff(ann, (Matrix{Float64}, Matrix{Float64}, Matrix{Float64}, Vector{Float64}))
dann(w1, w2, w3, x1) # network output + gradient on w1, w2, w3 and x1
Author: JuliaAttic
Source Code: https://github.com/JuliaAttic/ReverseDiffSource.jl
License: MIT license
Every day, we need automation tools to handle our routine tasks, and we even need automation's help in our projects. In this article, you will learn about 10 Python automation scripts that solve your everyday problems. Bookmark this article, and let's go.
Either you're the one creating the automation, or you're the one being automated.
—Tom Preston-Werner
👉 Photo Editing
Edit your photos with this amazing automation script, which uses the Pillow module. Below, I've made a list of image-editing functions you can use in your Python project or to solve everyday problems.
This script is a handful of code snippets for programmers who need to edit their images programmatically.
# Photo Editing
# pip install pillow

from PIL import Image, ImageFilter

# Resize an image
img = Image.open('img.jpg')
resize = img.resize((200, 300))
resize.save('output.jpg')

# Blur Image
img = Image.open('img.jpg')
blur = img.filter(ImageFilter.BLUR)
blur.save('output.jpg')

# Sharpen Image
img = Image.open('img.jpg')
sharp = img.filter(ImageFilter.SHARPEN)
sharp.save('output.jpg')

# Crop Image
img = Image.open('img.jpg')
crop = img.crop((0, 0, 50, 50))
crop.save('output.jpg')

# Rotate Image
img = Image.open('img.jpg')
rotate = img.rotate(90)
rotate.save('output.jpg')

# Flip Image
img = Image.open('img.jpg')
flip = img.transpose(Image.FLIP_LEFT_RIGHT)
flip.save('output.jpg')

# Transpose Image
img = Image.open('img.jpg')
transpose = img.transpose(Image.TRANSPOSE)
transpose.save('output.jpg')

# Convert Image to Grayscale
img = Image.open('img.jpg')
convert = img.convert('L')
convert.save('output.jpg')
👉 PDF Watermarker
This automation script will help you watermark your PDF page by page. It uses the PyPDF4 module to read the PDF and add the watermark. Check out the code below:
# Watermark PDF files
# pip install PyPDF4

import PyPDF4

def Watermark():
    pdf_file = "test.pdf"
    output_pdf = "output.pdf"
    watermark = "watermark.pdf"

    watermark_read = PyPDF4.PdfFileReader(watermark)
    watermark_page = watermark_read.getPage(0)

    pdf_reader = PyPDF4.PdfFileReader(pdf_file)
    pdf_writer = PyPDF4.PdfFileWriter()

    for page_num in range(pdf_reader.getNumPages()):
        page = pdf_reader.getPage(page_num)
        page.mergePage(watermark_page)
        pdf_writer.addPage(page)

    # writing output pdf file
    with open(output_pdf, 'wb') as pdf:
        pdf_writer.write(pdf)

Watermark()
👉 Video Editing
Now edit your videos programmatically with this automation script. It uses the Moviepy module to edit video. The script below is handy code for trimming videos, adding VFX, and adding audio to specific parts of the video. You can explore Moviepy further for more functions.
# Video Editing
# pip install moviepy

from moviepy.editor import *

# Trimming the video
clip_1 = VideoFileClip("sample_video.mp4").subclip(40, 50)
clip_2 = VideoFileClip("sample_video.mp4").subclip(68, 91)
final_clip = concatenate_videoclips([clip_1, clip_2])
final_clip.write_videofile("output.mp4")

# Adding VFX
clip_1 = (VideoFileClip("sample_video.mp4").subclip(40, 50)
          .fx(vfx.colorx, 1.2)
          .fx(vfx.lum_contrast, 0, 30, 100))
clip_2 = (VideoFileClip("sample_video.mp4").subclip(68, 91)
          .fx(vfx.invert_colors))
final_clip = concatenate_videoclips([clip_1, clip_2])
final_clip.write_videofile("output.mp4")

# Add Audio to Video
clip = VideoFileClip("sample_video.mp4")
# Add audio to only the first 5 sec
clip = clip.subclip(0, 5)
audioclip = AudioFileClip("audio.mp3").subclip(0, 5)
videoclip = clip.set_audio(audioclip)
videoclip.write_videofile("output.mp4")
👉 Speech-to-Text AI
You saw my code on converting text to speech, but did you know we can also convert speech to text in Python? This amazing code will show you how. Check the code below:
# Convert Speech to Text
# pip install SpeechRecognition

import speech_recognition as sr

def SpeechToText():
    Ai = sr.Recognizer()
    with sr.Microphone() as source:
        listening = Ai.listen(source, phrase_time_limit=6)
    try:
        command = Ai.recognize_google(listening).lower()
        print("You said: " + command)
    except sr.UnknownValueError:
        print("Sorry, can't understand. Try again.")

SpeechToText()
👉 Request API
Need to call an API? Then try the script below. It uses the Requests module, which can get/post data in any API call. The code below has two parts: the first gets the HTML source code, and the second logs in to a site.
# Request Api
# pip install requests

import requests

# Get Data
headers = {
    "Connection": "keep-alive",
    "User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/72.0.3626.121 Safari/537.36"
}
r = requests.get('https://api.example.com', headers=headers)
print(r.status_code)  # 200
print(r.headers['content-type'])
print(r.content)  # HTML Data

# Login Site
payload = {'username': 'USERNAME', 'userpass': 'PASSWORD'}
r = requests.post('https://example.com/login', data=payload)
print(r.status_code)  # 200
👉 Python GUI
This script will help you create graphical user interface programs in Python. It uses the latest PyQt6 module, and I've coded most of the important widgets below:
# Python GUI
# pip install PyQt6

import sys
from PyQt6.QtWidgets import (QApplication, QWidget, QPushButton,
                             QMessageBox, QLabel, QLineEdit)

def Application():
    app = QApplication(sys.argv)
    win = QWidget()
    win.resize(300, 300)
    win.move(200, 200)
    win.setWindowTitle('Medium Article')

    # Create Buttons
    btn = QPushButton('Quit', win)

    # Message Box
    QMessageBox.question(win, 'Message', "Are you sure to quit?")

    # Label Text
    lbl = QLabel('Hello World', win)

    # Button Clicked
    btn.clicked.connect(lambda: QMessageBox.question(win, 'Message', "Are you sure to quit?"))

    # Entry Box
    entry = QLineEdit(win)

    win.show()
    sys.exit(app.exec())

if __name__ == '__main__':
    Application()
👉 Spell Checker
Have a lot of documents and huge amounts of text? If you want to check the spelling, this Python script will help you solve the problem. It uses the Pyspellchecker module to check spelling and offer correction suggestions.
# Spell Checker in Python
# pip install pyspellchecker

from spellchecker import SpellChecker

spell = SpellChecker()
words = spell.unknown(['Python', 'is', 'a', 'good', 'lantyguage'])

for w in words:
    print(spell.correction(w))  # language
    print(spell.candidates(w))  # {'language'}
👉 Grammar Checker
Inspired by Grammarly, why not try building your own grammar checker in Python? The script below will help you check your grammar. It uses the Gingerit module, which is API-based.
# Grammar Checker in Python
# pip install gingerit

from gingerit.gingerit import GingerIt

text = "Welcm Progammer to Python"

grammar = GingerIt()
correction = grammar.parse(text)

print(correction["result"])  # Welcome, Programmer to Python
print(correction['corrections'])
👉 Automate Windows, Mac, and Linux
We have automated web apps and smartphones, so why not operating systems? This automation script will automate Windows, macOS, and Linux using the PyAutoGUI module in Python. Try the code now!
# Automate Win, Mac and Linux
# pip install PyAutoGUI

import pyautogui as py

# Mouse Movements
py.moveTo(100, 100)
py.moveTo(200, 200, duration=1)
py.click(100, 100)
py.doubleClick(200, 200)

# Keyboard Inputs
py.write('Hello World!', interval=0.25)
py.press('enter')
py.hotkey('ctrl', 'c')
py.keyDown('shift')
py.keyUp('shift')

# Screen Automation
img = py.screenshot('screenshot.jpg')
img.save('screenshot.jpg')

loc = py.locateOnScreen('icon.jpg')
print(loc)
👉 Read Excel
You probably use Pandas to read CSV files, but did you know you can also read Excel files with it? Take a look at the script below to see how it works:
# Read Excel
# pip install pandas openpyxl

import pandas as pd

df = pd.read_excel('test.xlsx', sheet_name='Sheet1')

# Read Columns
name = df['Name'].to_list()
Id = df['Id'].to_list()

print(name)  # ["haider", "Dustin", "Tadashi"]
print(Id)  # [245, 552, 892]
👉 Final Thoughts
Well, I'm glad you made it to the end of this article, and I hope you found something useful. If you like this article, don't forget to share it ❤️ with your friends and hit the clap 👏 to show your appreciation.
Happy coding!
Source: https://python.plainenglish.io/10-python-automation-scripts-for-everyday-problems-3ca0f2011282
1650464220
毎日、日常のタスクを解決するための自動化ツールが必要であり、プロジェクトでの自動化の支援も必要です。この記事では、日常の問題を解決する10個のPython自動化スクリプトについて説明します。この記事のブックマークを作成して、手放します。
自動化を作成するのはあなたか、自動化されているのはあなたです。
—トム・プレストン・ウェルナー
👉写真編集
Pillowモジュールを使用するこの素晴らしい自動化スクリプトを使用して写真を編集します。以下に、Pythonプロジェクトで使用したり、日常生活の問題を解決したりできる画像編集機能のリストを作成しました。
このスクリプトは、画像をプログラムで編集する必要があるプログラマー向けのスニペットコードです。
# Photo Editing
# pip install pillowfrom PIL import Image, ImageFilter# Resize an image
img = Image.open('img.jpg')
resize = img.resize((200, 300))
resize.save('output.jpg')# Blur Image
img = Image.open('img.jpg')
blur = img.filter(ImageFilter.BLUR)
blur.save('output.jpg')# Sharp Image
img = Image.open('img.jpg')
sharp = img.filter(ImageFilter.SHARPEN)
sharp.save('output.jpg')# Crop Image
img = Image.open('img.jpg')
crop = img.crop((0, 0, 50, 50))
crop.save('output.jpg')# Rotate Image
img = Image.open('img.jpg')
rotate = img.rotate(90)
rotate.save('output.jpg')# Flip Image
img = Image.open('img.jpg')
flip = img.transpose(Image.FLIP_LEFT_RIGHT)
flip.save('output.jpg')# Transpose Image
img = Image.open('img.jpg')
transpose = img.transpose(Image.TRANSPOSE)
transpose.save('output.jpg')# Convert Image to GreyScale
img = Image.open('img.jpg')
convert = img.convert('L')
convert.save('output.jpg')
👉PDFウォーターマーカー
この自動化スクリプトは、PDFをページごとに透かしするのに役立ちます。このスクリプトは、PyPDF4モジュールを使用して、透かしを読み取って追加します。以下のコードを確認してください。
# Watermark PDF files
# pip install PyPDF4
import PyPDF4

def Watermark():
    pdf_file = "test.pdf"
    output_pdf = "output.pdf"
    watermark = "watermark.pdf"

    watermark_read = PyPDF4.PdfFileReader(watermark)
    watermark_page = watermark_read.getPage(0)

    pdf_reader = PyPDF4.PdfFileReader(pdf_file)
    pdf_writer = PyPDF4.PdfFileWriter()

    for page_num in range(pdf_reader.getNumPages()):
        page = pdf_reader.getPage(page_num)
        page.mergePage(watermark_page)
        pdf_writer.addPage(page)

    # Write the output PDF file
    with open(output_pdf, 'wb') as pdf:
        pdf_writer.write(pdf)

Watermark()
👉 Video Editing
Next, edit your videos programmatically with this automation script, which uses the Moviepy module. The script below is handy code for trimming a video, adding VFX, and adding audio to a specific part of a video. You can explore Moviepy further for more features.
# Video Editing
# pip install moviepy
from moviepy.editor import *

# Trim the video
clip_1 = VideoFileClip("sample_video.mp4").subclip(40, 50)
clip_2 = VideoFileClip("sample_video.mp4").subclip(68, 91)
final_clip = concatenate_videoclips([clip_1, clip_2])
final_clip.write_videofile("output.mp4")

# Add VFX
clip_1 = (VideoFileClip("sample_video.mp4").subclip(40, 50)
          .fx(vfx.colorx, 1.2)
          .fx(vfx.lum_contrast, 0, 30, 100))
clip_2 = (VideoFileClip("sample_video.mp4").subclip(68, 91)
          .fx(vfx.invert_colors))
final_clip = concatenate_videoclips([clip_1, clip_2])
final_clip.write_videofile("output.mp4")

# Add audio to a video
clip = VideoFileClip("sample_video.mp4")
# Add audio to only the first 5 seconds
clip = clip.subclip(0, 5)
audioclip = AudioFileClip("audio.mp3").subclip(0, 5)
videoclip = clip.set_audio(audioclip)
videoclip.write_videofile("output.mp4")
👉 Speech-to-Text AI
You've seen my code for converting text to speech, but did you know that you can also convert speech to text in Python? This awesome code shows you how; check it out below.
# Convert Speech to Text
# pip install SpeechRecognition
import speech_recognition as sr

def SpeechToText():
    Ai = sr.Recognizer()
    with sr.Microphone() as source:
        listening = Ai.listen(source, phrase_time_limit=6)
    try:
        command = Ai.recognize_google(listening).lower()
        print("You said: " + command)
    except sr.UnknownValueError:
        print("Sorry, can't understand. Try again")

SpeechToText()
👉 Requests API
Need to call an API? Then try the script below, which uses the requests module to get/post data with any API call. The code below has two parts: one fetches the HTML source code, and the other logs in to a site.
# Request API
# pip install requests
import requests

# Get data
headers = {
    "Connection": "keep-alive",
    "User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/72.0.3626.121 Safari/537.36"
}
r = requests.get('https://api.example.com', headers=headers)
print(r.status_code)  # 200
print(r.headers['content-type'])
print(r.content)  # HTML data

# Log in to a site
payload = {'username': 'USERNAME', 'userpass': 'PASSWORD'}
r = requests.post('https://example.com/login', data=payload)
print(r.status_code)  # 200
👉 Python GUI
This script lets you create graphical-user-interface programs in Python. It uses the modern PyQt6 module, and below I've coded most of the important widgets.
# Python GUI
# pip install PyQt6
import sys
from PyQt6.QtWidgets import QApplication, QWidget, QPushButton, QMessageBox, QLabel, QLineEdit

def Application():
    app = QApplication(sys.argv)
    win = QWidget()
    win.resize(300, 300)
    win.move(200, 200)
    win.setWindowTitle('Medium Article')

    # Create a button
    btn = QPushButton('Quit', win)

    # Label text
    lbl = QLabel('Hello World', win)

    # Entry box
    entry = QLineEdit(win)

    # Button clicked: show a message box
    btn.clicked.connect(lambda: QMessageBox.question(win, 'Message', "Are you sure to quit?"))

    win.show()
    sys.exit(app.exec())

if __name__ == '__main__':
    Application()
👉 Spell Checker
If you have lots of documents and huge amounts of text to spell-check, then this Python script can help you solve the problem. It uses the Pyspellchecker module to check spelling and suggest corrections.
# Spell Checker in Python
# pip install pyspellchecker
from spellchecker import SpellChecker

spell = SpellChecker()
Words = spell.unknown(['Python', 'is', 'a', 'good', 'lantyguage'])
for w in Words:
    print(spell.correction(w))  # language
    print(spell.candidates(w))  # {language}
👉 Grammar Checker
Inspired by Grammarly, why not create your own grammar checker in Python? The script below helps you check grammar using the Gingerit module, which is API based.
# Grammar Checker in Python
# pip install gingerit
from gingerit.gingerit import GingerIt

text = "Welcm Progammer to Python"
Grammar = GingerIt()
Correction = Grammar.parse(text)
print(Correction["result"])  # Welcome, Programmer to Python
print(Correction['corrections'])
👉 Automate Win, Mac, and Linux
We have automated web apps and smartphones, but what about operating systems? This automation script automates Windows, macOS, and Linux using Python's PyAutoGUI module. Try the code now!
# Automate Win, Mac and Linux
# pip install PyAutoGUI
import pyautogui as py

# Mouse movements
py.moveTo(100, 100)
py.moveTo(200, 200, duration=1)
py.click(100, 100)
py.doubleClick(200, 200)

# Keyboard inputs
py.write('Hello World!', interval=0.25)
py.press('enter')
py.hotkey('ctrl', 'c')
py.keyDown('shift')
py.keyUp('shift')

# Screen automation
img = py.screenshot('screenshot.jpg')
img.save('screenshot.jpg')
loc = py.locateOnScreen('icon.jpg')
print(loc)
👉 Read Excel
You probably use pandas to read CSV files, but did you know that it can read Excel files too? Take a look at the following script to see how it works.
# Read Excel
# pip install pandas
import pandas as pd

df = pd.read_excel('test.xlsx', sheet_name='Sheet1')

# Read columns
name = df['Name'].to_list()
Id = df['Id'].to_list()
print(name)  # ["haider", "Dustin", "Tadashi"]
print(Id)  # [245, 552, 892]
👉 Final Thoughts
Well, I'm glad you made it to the end of this article, and I hope you found something useful. If you liked this article, don't forget to share it ❤️ with your friends and hit the Clap 👏 to show your appreciation as fellow programmers.
Happy coding!
Source: https://python.plainenglish.io/10-python-automation-scripts-for-everyday-problems-3ca0f2011282
1649499000
Release It! 🚀
🚀 Generic CLI tool to automate versioning and package publishing related tasks (e.g. bumping the version in package.json).

Use release-it for version management and publish to anywhere with its versatile configuration, a powerful plugin system, and hooks to execute any command you need to test, build, and/or publish your project.
Although release-it is a generic release tool, installation requires npm. To use release-it, a package.json file is not required. The recommended way to install release-it also adds basic configuration. Answer one or two questions and it's ready:
npm init release-it
Alternatively, install it manually, and add the release script to package.json:
npm install --save-dev release-it
{
"name": "my-package",
"version": "1.0.0",
"scripts": {
"release": "release-it"
},
"devDependencies": {
"release-it": "*"
}
}
Now you can run npm run release from the command line (any release-it arguments go behind the --):
npm run release
npm run release -- minor --ci
Use release-it in any (non-npm) project, take it for a test drive, or install it globally:
# Run release-it from anywhere (without installation)
npx release-it
# Install globally and run from anywhere
npm install --global release-it
release-it
Release a new version:
release-it
You will be prompted to select the new version, and more prompts will follow based on your setup.
Run release-it from the root of the project to prevent potential issues.
Use --dry-run to show the interactivity and the commands it would execute.
→ See Dry Runs for more details.
To print the next version without releasing anything, add the --release-version flag.
Out of the box, release-it has sane defaults, and plenty of options to configure it. Most projects use a .release-it.json file in the project root, or a release-it property in package.json.
→ See Configuration for more details.
Here's a quick example .release-it.json:
{
"git": {
"commitMessage": "chore: release v${version}"
},
"github": {
"release": true
}
}
By default, release-it is interactive and allows you to confirm each task before execution:
By using the --ci option, the process is fully automated without prompts. The configured tasks will be executed as demonstrated in the first animation above. In a Continuous Integration (CI) environment, this non-interactive mode is activated automatically.
Use --only-version to use a prompt only to determine the version, and automate the rest.
How does release-it determine the latest version?

- For projects with a package.json, its version will be used (see npm to skip this).
- Otherwise, 0.0.0 will be used as the latest version.

Alternatively, a plugin can be used to override this (e.g. to manage a VERSION or composer.json file).

Add the --release-version flag to print the next version without releasing anything.
Git projects are supported well by release-it, automating the tasks to stage, commit, tag and push releases to any Git remote.
→ See Git for more details.
GitHub projects can have releases attached to Git tags, containing release notes and assets. There are two ways to add GitHub releases in your release-it flow:
- Automated (requires a GITHUB_TOKEN)

→ See GitHub Releases for more details.
GitLab projects can have releases attached to Git tags, containing release notes and assets. To automate GitLab releases, set gitlab.release: true.
→ See GitLab Releases for more details.
By default, release-it generates a changelog, to show and help select a version for the new release. Additionally, this changelog serves as the release notes for the GitHub or GitLab release.
The default command is based on git log .... This setting (git.changelog) can be overridden. To further customize the release notes for the GitHub or GitLab release, there's github.releaseNotes or gitlab.releaseNotes. Make sure any of these commands output the changelog to stdout. Plugins are available for this as well.
→ See Changelog for more details.
With a package.json in the current directory, release-it will let npm bump the version in package.json (and package-lock.json if present), and publish to the npm registry.
→ See Publish to npm for more details.
With release-it, it's easy to create pre-releases: a version of your software that you want to make available while it's not in the stable semver range yet. Often "alpha", "beta", and "rc" (release candidate) are used as identifiers for pre-releases. An example pre-release version is 2.0.0-beta.0.
→ See Manage pre-releases for more details.
Use --no-increment to not increment the last version, but update the last existing tag/version.

This may be helpful in cases where the version was already incremented. For example:

- Use release-it --no-increment --no-npm to skip the npm publish and try pushing the same Git tag again.

Use script hooks to run shell commands at any moment during the release process (such as before:init or after:release).
The format is [prefix]:[hook] or [prefix]:[plugin]:[hook]:
part | value |
---|---|
prefix | before or after |
plugin | version, git, npm, github, gitlab |
hook | init, bump, release |
Use the optional :plugin part in the middle to hook into a life cycle method exactly before or after any plugin.
The core plugins include version, git, npm, github, and gitlab.
Note that hooks like after:git:release will not run when either the git push failed, or when it is configured not to be executed (e.g. git.push: false). See execution order for more details on execution order of plugin lifecycle methods.
All commands can use configuration variables (like template strings). An array of commands can also be provided, they will run one after another. Some example release-it configuration:
{
"hooks": {
"before:init": ["npm run lint", "npm test"],
"after:my-plugin:bump": "./bin/my-script.sh",
"after:bump": "npm run build",
"after:git:release": "echo After git push, before github release",
"after:release": "echo Successfully released ${name} v${version} to ${repo.repository}."
}
}
The variables can be found in the default configuration. Additionally, the following variables are exposed:
- version
- latestVersion
- changelog
- name
- repo.remote, repo.protocol, repo.host, repo.owner, repo.repository, repo.project
All variables are available in all hooks. The only exception is that the additional variables listed above are not yet available in the init hook.
Use --verbose to log the output of the commands.
For the sake of verbosity, the full list of hooks is actually: init, beforeBump, bump, beforeRelease, release, and afterRelease. However, hooks like before:beforeRelease look weird and are usually not useful in practice.
Since v11, release-it can be extended in many, many ways. Here are some plugins:
Plugin | Description |
---|---|
@release-it/bumper | Read & write the version from/to any file |
@release-it/conventional-changelog | Provides recommended bump, conventional-changelog, and updates CHANGELOG.md |
@release-it/keep-a-changelog | Maintain CHANGELOG.md using the Keep a Changelog standards |
release-it-lerna-changelog | Integrates lerna-changelog into the release-it pipeline |
release-it-yarn-workspaces | Releases each of your projects configured workspaces |
release-it-calver-plugin | Enables Calendar Versioning (calver) with release-it |
@grupoboticario/news-fragments | An easy way to generate your changelog file |
@j-ulrich/release-it-regex-bumper | Regular expression based version read/write plugin for release-it |
Internally, release-it uses its own plugin architecture (for Git, GitHub, GitLab, npm).
→ See all release-it plugins on npm.
→ See plugins for documentation to write plugins.
Deprecated. Please see distribution repository for more details.
Use --disable-metrics to opt out of sending some anonymous statistical data to Google Analytics. For details, refer to lib/metrics.js. Please consider not opting out: more data means more support for future development.
- With release-it --verbose (or -V), release-it prints the output of every user-defined hook.
- With release-it -VV, release-it also prints the output of every internal command.
- Use DEBUG=release-it:* release-it [...] to print configuration and more error details.

Use verbose: 2 in a configuration file to have the equivalent of -VV on the command line.
While mostly used as a CLI tool, release-it can be used as a dependency to integrate in your own scripts. See use release-it programmatically for example code.
Author: Release-it
Source Code: https://github.com/release-it/release-it
License: MIT License
1646912322
https://www.blog.duomly.com/python-automation-ideas/
Python is a versatile language that can be used for various automation tasks. You can use Python for automating file or folder management, generating reports from data stored in a database, monitoring logs on your servers, creating website scrapers, among other things.
If you’re looking for ways to automate tasks with Python, be sure to read the article above for some ideas.
#python #Python #automation #automated #businesses #business #startup #startups
1645210800
How to Shift from Manual to Automated Testing
1642726860
PHP_CodeSniffer is a set of two PHP scripts: the main phpcs script that tokenizes PHP, JavaScript and CSS files to detect violations of a defined coding standard, and a second phpcbf script to automatically correct coding standard violations. PHP_CodeSniffer is an essential development tool that ensures your code remains clean and consistent.
PHP_CodeSniffer requires PHP version 5.4.0 or greater, although individual sniffs may have additional requirements such as external applications and scripts. See the Configuration Options manual page for a list of these requirements.
If you're using PHP_CodeSniffer as part of a team, or you're running it on a CI server, you may want to configure your project's settings using a configuration file.
The easiest way to get started with PHP_CodeSniffer is to download the Phar files for each of the commands:
# Download using curl
curl -OL https://squizlabs.github.io/PHP_CodeSniffer/phpcs.phar
curl -OL https://squizlabs.github.io/PHP_CodeSniffer/phpcbf.phar
# Or download using wget
wget https://squizlabs.github.io/PHP_CodeSniffer/phpcs.phar
wget https://squizlabs.github.io/PHP_CodeSniffer/phpcbf.phar
# Then test the downloaded PHARs
php phpcs.phar -h
php phpcbf.phar -h
If you use Composer, you can install PHP_CodeSniffer system-wide with the following command:
composer global require "squizlabs/php_codesniffer=*"
Make sure you have the composer bin dir in your PATH. The default value is ~/.composer/vendor/bin/, but you can check the value that you need to use by running composer global config bin-dir --absolute.

Or alternatively, include a dependency for squizlabs/php_codesniffer in your composer.json file. For example:
{
"require-dev": {
"squizlabs/php_codesniffer": "3.*"
}
}
You will then be able to run PHP_CodeSniffer from the vendor bin directory:
./vendor/bin/phpcs -h
./vendor/bin/phpcbf -h
If you use Phive, you can install PHP_CodeSniffer as a project tool using the following commands:
phive install phpcs
phive install phpcbf
You will then be able to run PHP_CodeSniffer from the tools directory:
./tools/phpcs -h
./tools/phpcbf -h
If you use PEAR, you can install PHP_CodeSniffer using the PEAR installer. This will make the phpcs and phpcbf commands immediately available for use. To install PHP_CodeSniffer using the PEAR installer, first ensure you have installed PEAR and then run the following command:
pear install PHP_CodeSniffer
You can also download the PHP_CodeSniffer source and run the phpcs and phpcbf commands directly from the Git clone:
git clone https://github.com/squizlabs/PHP_CodeSniffer.git
cd PHP_CodeSniffer
php bin/phpcs -h
php bin/phpcbf -h
The default coding standard used by PHP_CodeSniffer is the PEAR coding standard. To check a file against the PEAR coding standard, simply specify the file's location:
$ phpcs /path/to/code/myfile.php
Or if you wish to check an entire directory you can specify the directory location instead of a file.
$ phpcs /path/to/code-directory
If you wish to check your code against the PSR-12 coding standard, use the --standard command line argument:
$ phpcs --standard=PSR12 /path/to/code-directory
If PHP_CodeSniffer finds any coding standard errors, a report will be shown after running the command.
Full usage information and example reports are available on the usage page.
The documentation for PHP_CodeSniffer is available on the Github wiki.
Bug reports and feature requests can be submitted on the Github Issue Tracker.
See CONTRIBUTING.md for information.
PHP_CodeSniffer uses a MAJOR.MINOR.PATCH version number format.

The MAJOR version is incremented when backwards-incompatible changes are made to:
- how the phpcs or phpcbf commands are used, or
- the ruleset.xml format.
format, orThe MINOR
version is incremented when:
phpcs
and phpcbf
commands, orruleset.xml
format, orNOTE: Backwards-compatible changes to the API used by sniff developers will allow an existing sniff to continue running without producing fatal errors but may not result in the sniff reporting the same errors as it did previously without changes being required.
The PATCH version is incremented when backwards-compatible bug fixes are made.
NOTE: As PHP_CodeSniffer exists to report and fix issues, most bugs are the result of coding standard errors being incorrectly reported or coding standard errors not being reported when they should be. This means that the messages produced by PHP_CodeSniffer, and the fixes it makes, are likely to be different between PATCH versions.
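The MAJOR.MINOR.PATCH scheme described above can be sketched in a few lines; this bump helper is purely illustrative and not part of PHP_CodeSniffer:

```python
# Bump the right component of a MAJOR.MINOR.PATCH version string:
# 'major' for breaking changes, 'minor' for backwards-compatible
# additions, and 'patch' for backwards-compatible bug fixes.
def bump(version, change):
    major, minor, patch = (int(part) for part in version.split("."))
    if change == "major":
        return f"{major + 1}.0.0"
    if change == "minor":
        return f"{major}.{minor + 1}.0"
    return f"{major}.{minor}.{patch + 1}"

print(bump("3.6.2", "major"))  # 4.0.0
print(bump("3.6.2", "minor"))  # 3.7.0
print(bump("3.6.2", "patch"))  # 3.6.3
```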
Author: Squizlabs
Source Code: https://github.com/squizlabs/PHP_CodeSniffer
License: BSD-3-Clause License
1642073280
Finance Quant Machine Learning
Easily develop state-of-the-art time series models to forecast univariate data series. Simply load your data and select which models you want to test. This is the largest repository of automated structural and machine learning time series models. Please get in contact if you want to contribute a model. This is a fledgling project; all advice is appreciated.
pip install atspy
- ARIMA - Automated ARIMA Modelling
- Prophet - Modeling Multiple Seasonality With Linear or Non-linear Growth
- HWAAS - Exponential Smoothing With Additive Trend and Additive Seasonality
- HWAMS - Exponential Smoothing with Additive Trend and Multiplicative Seasonality
- NBEATS - Neural basis expansion analysis (now fixed at 20 Epochs)
- Gluonts - RNN-based Model (now fixed at 20 Epochs)
- TATS - Seasonal and Trend no Box Cox
- TBAT - Trend and Box Cox
- TBATS1 - Trend, Seasonal (one), and Box Cox
- TBATP1 - TBATS1 but Seasonal Inference is Hardcoded by Periodicity
- TBATS2 - TBATS1 With Two Seasonal Periods

Create a class instance with AutomatedModel(df), then use am.models_dict_in for in-sample and am.models_dict_out for out-of-sample prediction.

from atspy import AutomatedModel
The data requires strict preprocessing: no periods can be skipped, and there cannot be any empty values.
import pandas as pd
df = pd.read_csv("https://raw.githubusercontent.com/firmai/random-assets-two/master/ts/monthly-beer-australia.csv")
df.Month = pd.to_datetime(df.Month)
df = df.set_index("Month"); df
Month | Megaliters
---|---
1956-01-01 | 93.2 |
1956-02-01 | 96.0 |
1956-03-01 | 95.2 |
1956-04-01 | 77.1 |
1956-05-01 | 70.9 |
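A quick sanity check for the preprocessing requirements stated above (no skipped periods, no empty values) might look like this; it is shown on a tiny made-up monthly series rather than the real dataset:

```python
# Verify that a monthly series is gap-free and has no missing values.
import pandas as pd

def check_series(df):
    # A gap-free monthly index has exactly as many rows as the full range
    full_range = pd.date_range(df.index.min(), df.index.max(), freq="MS")
    no_gaps = len(full_range) == len(df)
    no_missing = not df.isna().any().any()
    return no_gaps and no_missing

df = pd.DataFrame(
    {"Megaliters": [93.2, 96.0, 95.2]},
    index=pd.to_datetime(["1956-01-01", "1956-02-01", "1956-03-01"]),
)
print(check_series(df))  # True
```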
- AutomatedModel - Returns a class instance.
- forecast_insample - Returns an in-sample forecasted dataframe and performance.
- forecast_outsample - Returns an out-of-sample forecasted dataframe.
- ensemble - Returns the results of three different forms of ensembles.
- models_dict_in - Returns a dictionary of the fully trained in-sample models.
- models_dict_out - Returns a dictionary of the fully trained out-of-sample models.

from atspy import AutomatedModel
model_list = ["HWAMS","HWAAS","TBAT"]
am = AutomatedModel(df=df, model_list=model_list, forecast_len=20)
Other models to try (add as many as you like; note that ARIMA is slow): ["ARIMA", "Gluonts", "Prophet", "NBEATS", "TATS", "TBATS1", "TBATP1", "TBATS2"]
forecast_in, performance = am.forecast_insample(); forecast_in
Date | Target | HWAMS | HWAAS | TBAT
---|---|---|---|---
1985-10-01 | 181.6 | 161.962148 | 162.391653 | 148.410071 |
1985-11-01 | 182.0 | 174.688055 | 173.191756 | 147.999237 |
1985-12-01 | 190.0 | 189.728744 | 187.649575 | 147.589541 |
1986-01-01 | 161.2 | 155.077205 | 154.817215 | 147.180980 |
1986-02-01 | 155.5 | 148.054292 | 147.477692 | 146.773549 |
performance
Metric | Target | HWAMS | HWAAS | TBAT
---|---|---|---|---
rmse | 0.000000 | 17.599400 | 18.993827 | 36.538009 |
mse | 0.000000 | 309.738878 | 360.765452 | 1335.026136 |
mean | 155.293277 | 142.399639 | 140.577496 | 126.590412 |
forecast_out = am.forecast_outsample(); forecast_out
Date | HWAMS | HWAAS | TBAT
---|---|---|---
1995-09-01 | 137.518755 | 137.133938 | 142.906275 |
1995-10-01 | 164.136220 | 165.079612 | 142.865575 |
1995-11-01 | 178.671684 | 180.009560 | 142.827110 |
1995-12-01 | 184.175954 | 185.715043 | 142.790757 |
1996-01-01 | 147.166448 | 147.440026 | 142.756399 |
all_ensemble_in, all_ensemble_out, all_performance = am.ensemble(forecast_in, forecast_out)
all_performance
Model | rmse | mse | mean
---|---|---|---
ensemble_lgb__X__HWAMS | 9.697588 | 94.043213 | 146.719412 |
ensemble_lgb__X__HWAMS__X__HWAMS_HWAAS__X__ensemble_ts__X__HWAAS | 9.875212 | 97.519817 | 145.250837 |
ensemble_lgb__X__HWAMS__X__HWAMS_HWAAS | 11.127326 | 123.817378 | 142.994374 |
ensemble_lgb | 12.748526 | 162.524907 | 156.487208 |
ensemble_lgb__X__HWAMS__X__HWAMS_HWAAS__X__ensemble_ts__X__HWAAS__X__HWAMS_HWAAS_TBAT__X__TBAT | 14.589155 | 212.843442 | 138.615567 |
HWAMS | 15.567905 | 242.359663 | 136.951615 |
HWAMS_HWAAS | 16.651370 | 277.268110 | 135.544299 |
ensemble_ts | 17.255107 | 297.738716 | 163.134079 |
HWAAS | 17.804066 | 316.984751 | 134.136983 |
HWAMS_HWAAS_TBAT | 23.358758 | 545.631579 | 128.785846 |
TBAT | 39.003864 | 1521.301380 | 115.268940 |
all_ensemble_in[["Target","ensemble_lgb__X__HWAMS","HWAMS","HWAAS"]].plot()
all_ensemble_out[["ensemble_lgb__X__HWAMS","HWAMS","HWAAS"]].plot()
am.models_dict_in
{'HWAAS': <statsmodels.tsa.holtwinters.HoltWintersResultsWrapper at 0x7f42f7822d30>,
'HWAMS': <statsmodels.tsa.holtwinters.HoltWintersResultsWrapper at 0x7f42f77fff60>,
'TBAT': <tbats.tbats.Model.Model at 0x7f42d3aab048>}
am.models_dict_out
{'HWAAS': <statsmodels.tsa.holtwinters.HoltWintersResultsWrapper at 0x7f9c01309278>,
'HWAMS': <statsmodels.tsa.holtwinters.HoltWintersResultsWrapper at 0x7f9c01309cf8>,
'TBAT': <tbats.tbats.Model.Model at 0x7f9c08f18ba8>}
Follow this link if you want to run the package in the cloud.
If you use AtsPy in your research, please consider citing it. I have also written a small report that can be found on SSRN.
BibTeX entry:
@software{atspy,
title = {{AtsPy}: Automated Time Series Models in Python.},
author = {Snow, Derek},
url = {https://github.com/firmai/atspy/},
version = {1.15},
date = {2020-02-17},
}
@misc{atspy,
author = {Snow, Derek},
title = {{AtsPy}: Automated Time Series Models in Python (1.15).},
year = {2020},
url = {https://github.com/firmai/atspy/},
}
Author: Firmai
Source Code: https://github.com/firmai/atspy
1639222860
In this video, you will learn how to perform Geolocation testing using xUnit.
It is Part VII of the LambdaTest xUnit Tutorial series. In this video, Anton Angelov (@angelovstanton) explains Geolocation testing using xUnit with practical implementation. If you build consumer web products for different audiences, geolocation testing becomes necessary because a web application or a website may behave differently if viewed from different locations.
Geolocation browser testing helps you give a uniform experience to users irrespective of their location.
This video answers 🚩
◼ How do you do geolocation?
◼ How do you test geolocation in Chrome?
◼ How do you test for geofencing?
Vɪᴅᴇᴏ Cʜᴀᴘᴛᴇʀꜱ 🔰
00:00 - Introduction
01:00 - Session starts
02:50 - What is Geolocation testing?
04:42 - Performing Geolocation testing on LambdaTest platform overcloud
55:10 - Conclusion of the session
Start FREE testing -: https://accounts.lambdatest.com/register?utm_source=YouTube&utm_medium=YTChannel&utm_campaign=Video&utm_term=gOgAQfYYcqk
1639163400
This video explains how to write and run your first test cases in mocha.
It is Part IV of the JavaScript Test Automation LambdaTest Tutorial series. In this video, Ryan Howard (@ryantestsstuff), an engineer, explains how we can use Mocha JS and run tests in Mocha. You will also gain insights into how Mocha testing works.
This video answers 🚩
◼ How do you write test cases in mocha?
◼ How do I run a specific test in mocha?
◼ How do you test a mocha function?
◼ How do you write unit test cases with mocha?
◼ How do you assert in mocha?
◼ Do mocha tests run in order?
Vɪᴅᴇᴏ Cʜᴀᴘᴛᴇʀꜱ 🔰
➤ 00:00 Introduction
➤ 01:03 About Mocha Test Framework
➤ 03:19 How to add mocha to your test?
➤ 09:58 Run your first test using Mocha
Learn more-: https://accounts.lambdatest.com/register?utm_source=YouTube&utm_medium=YTChannel&utm_campaign=Video&utm_term=hUDQOcabs0Y
1639156020
In this video, you will learn how to write parameterized tests in xUnit Selenium C#.
It is Part IV of the LambdaTest xUnit.NET core tutorial series. In this video, Anton Angelov (@angelovstanton) explains the use of xUnit Selenium C Sharp with the help of examples showcasing how to write parameterized tests in xUnit Selenium C#.
This video answers 🚩
◼ What is the use of xUnit?
◼ How do you write xUnit test cases?
◼ Can I use xUnit for .NET framework?
Vɪᴅᴇᴏ Cʜᴀᴘᴛᴇʀꜱ 🔰
00:00 - Introduction
00:58 - xUnit tutorial using Selenium C# begins
01:02 - Course modules
01:42 - About parameterized tests in xUnit using Selenium
03:15 - Practical begins - writing tests in xUnit Selenium C#
27:07 - Conclusion of the session
Start FREE testing -: https://accounts.lambdatest.com/register?utm_source=v%3DTkybFNn7GLY&utm_medium=YTChannel&utm_campaign=Video
1639148644
In this video, learn what is Assertion in Selenium JavaScript? How and when do we use them?
It is Part III of the LambdaTest JavaScript Test Automation Tutorial series. In this video, Ryan Howard (@ryantestsstuff), an engineer, explains assertions, their types, and their applicability in detail, showing practically what happens if a test fails while performing Selenium automation testing. This video covers how one can handle major errors/issues using assertions.
Vɪᴅᴇᴏ Cʜᴀᴘᴛᴇʀꜱ 👇
➤ 00:00 Introduction to JavaScript Testing Tutorial for beginners
➤ 01:00 What are Assertions in Selenium JavaScript
➤ 02:05 How to use Assertions in Selenium JavaScript using Node Assertion Library
➤ 12:29 How to use Assertions in Selenium JavaScript using Chai
Video also answers 🚩
------------💨
What are the different methods of assert?
How to 'assert' and 'verify' in JavaScript Selenium.
How do you do Assertion in Selenium?
How do you use assertions in Testing?
What are the different assertions in Testing?
What are the different methods of assert?
What is the use of Assertion in Selenium?
Know More, Visit: https://accounts.lambdatest.com/register?utm_source=YouTube&utm_medium=YTChannel&utm_campaign=Video&utm_term=JQGETyIx_O4
1639133820
This video will explain how to write the first Selenium Test in JavaScript. Explore more-: https://accounts.lambdatest.com/register?utm_source=YouTube&utm_medium=YouTubeChannel&utm_campaign=Videos&utm_term=w4cidssAdJg
This is Part II of the JavaScript Test Automation LambdaTest Tutorial series. Ryan Howard (@ryantestsstuff), a seasoned expert in Selenium & JavaScript, explains how to create & execute the test cases on the local Selenium Grid. He also demonstrates how to use Selenium Web Locators to find the required WebElement on the page and perform actions on the same.
This video answers 🌐-:
💠How do you write the first test case in Selenium with JavaScript?
💠Can I use Selenium for JavaScript?
💠How do I start testing with Selenium with JavaScript?
💠How do I run a test script in Selenium?
Vɪᴅᴇᴏ Cᴏɴᴛᴇɴᴛ Cʜᴀᴘᴛᴇʀꜱ 🔰
----------------------◾
00:00 - Introduction to JavaScript Testing Tutorial for beginners
01:19 - Selecting an IDE to write your first Selenium test in JavaScript
01:50 - Writing your first Selenium test in JavaScript
10:48 - How to use Selenium Web Locators when writing tests in JavaScript
16:30 - How to run your first test implemented in Selenium JavaScript
18:05 - How to write and execute your first test in Selenium JavaScript in 2 minutes
In the above Test Automation in JavaScript Tutorial series, Ryan Howard explains the fundamentals of JavaScript and its role in Selenium test automation, with practical examples. It covers everything from getting set up to building test cases on Selenium WebDriver using JavaScript, as well as using the LambdaTest platform to run tests across multiple browsers and versions in the cloud.
----------------------◾
GitHub repo for JavaScript Selenium Automation Testing: https://github.com/LambdaTest/javascript
1637202480
In this video, we are going to cover the types of automation frameworks in Selenium.
We are discussing the following types of automated testing frameworks:
✅Modular Based Testing Framework.
✅Data-Driven Framework.
✅Keyword-Driven Framework.
✅Hybrid Testing Framework.
✅ What is Data Driven Testing?
Data-driven testing is a software testing method in which test data is stored in table or spreadsheet format. It allows testers to provide a single test script that executes tests for all test data from a table and expects the test output in the same table.
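The idea can be sketched in a few lines of Python; the table rows and the add() function below are hypothetical stand-ins for real test data and the code under test:

```python
# Data-driven testing: one test script, many rows of test data.
test_table = [
    {"a": 1, "b": 2, "expected": 3},
    {"a": -1, "b": 1, "expected": 0},
    {"a": 0, "b": 0, "expected": 0},
]

def add(a, b):  # the code under test
    return a + b

def run_data_driven_tests(table):
    # Execute the same test once per row and record pass/fail
    return [add(row["a"], row["b"]) == row["expected"] for row in table]

print(run_data_driven_tests(test_table))  # [True, True, True]
```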
✅What is Keyword Driven Framework?
Keyword Driven Framework is a functional automation testing framework that divides test cases into four different parts in order to separate coding from test cases and test steps for better automation.
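A minimal sketch of the keyword-driven idea (every keyword and step function here is illustrative, not taken from any real framework):

```python
# Keyword-driven testing: test cases are (keyword, args) pairs, and a
# dispatch table maps keywords to code, separating tests from coding.
state = {}

def open_app(name):
    state["app"] = name

def enter_text(text):
    state["text"] = text

def verify_text(expected):
    assert state["text"] == expected

keywords = {
    "open_app": open_app,
    "enter_text": enter_text,
    "verify_text": verify_text,
}

test_case = [
    ("open_app", ["editor"]),
    ("enter_text", ["hello"]),
    ("verify_text", ["hello"]),
]

for keyword, args in test_case:
    keywords[keyword](*args)  # execute each step by name

print(state)  # {'app': 'editor', 'text': 'hello'}
```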
#automated #testautomation #selenium