Nat Grady

8 Automated Testing Best Practices for a Positive Testing Experience

It’s no wonder many developers view testing as a necessary evil that saps time and energy: Testing can be tedious, unproductive, and entirely too complicated.

My first experience with testing was awful. I worked on a team that had strict code coverage requirements. The workflow was: implement a feature, debug it, and write tests to ensure full code coverage. The team didn’t have integration tests, only unit tests with tons of manually initialized mocks, and most unit tests tested trivial manual mappings while using a library to perform automatic mappings. Every test tried to assert every available property, so every change broke dozens of tests.

I disliked working with tests because they were perceived as a time-consuming burden. However, I didn’t give up. The confidence testing provides and the automation of checks after every small change piqued my interest. I started reading and practicing, and learned that tests, when done right, could be both helpful and enjoyable.

In this article, I share eight automated testing best practices I wish I had known from the beginning.

Why You Need an Automated Test Strategy

Automated testing is often focused on the future, but when you implement it correctly, you benefit immediately. Using tools that help you do your job better can save time and make your work more enjoyable.

Imagine you’re developing a system that retrieves purchase orders from the company’s ERP and places those orders with a vendor. You have the price of previously ordered items in the ERP, but the current prices may be different. You want to control whether to place an order at a lower or higher price. You have user preferences stored, and you’re writing code to handle price fluctuations.

How would you check that the code works as expected? You would probably:

  1. Create a dummy order in the developer’s instance of the ERP (assuming you set it up beforehand).
  2. Run your app.
  3. Select that order and start the order-placing process.
  4. Gather data from the ERP’s database.
  5. Request the current prices from the vendor’s API.
  6. Override prices in code to create specific conditions.

You've stopped at a breakpoint and can step through the code to see what happens for one scenario, but there are many possible scenarios:

Allow higher price   Allow lower price   ERP price   Vendor price   Should we place the order?
false                false               10          10             true
(Here there would be three more preference combinations, but the prices are equal, so the result is the same.)
true                 false               10          11             true
true                 false               10          9              false
false                true                10          11             false
false                true                10          9              true
true                 true                10          11             true
true                 true                10          9              true

In case of a bug, the company may lose money, harm its reputation, or both. You need to check multiple scenarios and repeat the check loop several times. Doing so manually would be tedious. But tests are here to help!

Tests let you create any context without calls to unstable APIs. They eliminate the need for repetitive clicking through old and slow interfaces that are all too common in legacy ERP systems. All you have to do is define the context for the unit or subsystem and then any debugging, troubleshooting, or scenario exploring happens instantly—you run the test and you are back to your code. My preference is to set up a keybinding in my IDE that repeats my previous test run, giving immediate, automated feedback as I make changes.

1. Maintain the Right Attitude

Compared to manual debugging and self-testing, automated tests are more productive from the very beginning, even before any testing code is committed. After you check that your code behaves as expected—by manually testing or perhaps, for a more complex module, by stepping through it with a debugger during testing—you can use assertions to define what you expect for any combination of input parameters.

With tests passing, you’re almost ready to commit, but not quite. Prepare to refactor your code because the first working version usually isn’t elegant. Would you perform that refactoring without tests? That’s questionable because you’d have to complete all the manual steps again, which could diminish your enthusiasm.

What about the future? While performing any refactoring, optimization, or feature addition, tests help ensure that a module still behaves as expected after you change it, thereby instilling lasting confidence and allowing developers to feel better equipped to tackle upcoming work.

It’s counterproductive to think about tests as a burden or something that makes only code reviewers or leads happy. Tests are a tool that we as developers benefit from. We like when our code works and we don’t like to spend time on repetitive actions or on fixing code to address bugs.

Recently, I worked on refactoring in my codebase and asked my IDE to clean up unused using directives. To my surprise, tests showed several failures in my email reporting system. However, it was a valid failure: the cleanup removed some using directives in my Razor (HTML + C#) code for an email template, and as a result the template engine was no longer able to build valid HTML. I didn't expect that such a minor operation would break email reporting. Testing helped me avoid spending hours catching bugs all over the app right before its release, when I assumed that everything would work.

Of course, you have to know how to use these tools without cutting your proverbial fingers. It might seem that defining the context is tedious and can be harder than running the app, or that tests require too much maintenance to avoid becoming stale and useless. These are valid concerns, and we will address them.

2. Select the Right Type of Test

Developers often grow to dislike automated tests when they find themselves mocking a dozen dependencies only to check whether the code calls them. Alternatively, developers encounter a high-level test and try to reproduce every application state to check all variations in a small module. These patterns are unproductive and tedious, but we can avoid them by using the different test types as they were intended. (Tests should be practical and enjoyable, after all!)

Readers will need to know what unit tests are and how to write them, and be familiar with integration tests—if not, it’s worth pausing here to get up to speed.

There are dozens of testing types, but these five common types make an extremely effective combination:

 

A set of basic illustrations depicting unit tests, integration tests, functional tests, canary tests, and load tests.

Five Common Types of Tests

 

  • Unit tests are used to test an isolated module by calling its methods directly. Dependencies are not under test, so they're mocked.
  • Integration tests are used to test subsystems. You still call the module's methods directly, but here you care about dependencies, so don't mock them; use only real (production) dependent modules. You can still use an in-memory database or a mocked web server because these are mocks of infrastructure.
  • Functional tests are tests for the whole application, also known as end-to-end (E2E) tests. You use no direct calls. Instead, all the interaction goes through the API or user interface—these are the tests from the end-user perspective. However, infrastructure is still mocked.
  • Canary tests are similar to functional tests but with production infrastructure and a smaller set of actions. They’re used to ensure that newly deployed applications work.
  • Load tests are similar to canary tests but with real staging infrastructure and an even smaller set of actions, which are repeated many times.

It’s not always necessary to work with all five testing types from the beginning. In most cases, you can go a long way with the first three tests.

We’ll briefly examine the use cases of each type to help you select the right ones for your needs.

Unit Tests

Recall the example with different prices and handling preferences. It’s a good candidate for unit testing because we care only about what is happening inside the module, and the results have important business ramifications.

The module has many different combinations of input parameters, and we want a valid return value for every combination of valid arguments. Unit tests are good at ensuring this because they give you direct access to the function or method's input parameters, and you don't have to write dozens of test methods to cover every combination. In many languages, you can avoid duplicating test methods by defining a single test method that accepts the arguments your code needs along with the expected result. Then you can use your test tooling to feed different sets of values and expectations to that parameterized method, as in the sketch below.
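
To make this concrete, here is a minimal sketch of a parameterized test using Jest with TypeScript (the article's .NET stack would use xUnit theories instead). The shouldPlaceOrder function and its signature are hypothetical stand-ins for the pricing module, and the cases mirror the decision table above:

// Hypothetical implementation of the pricing rule from the table above.
function shouldPlaceOrder(
  allowHigher: boolean,
  allowLower: boolean,
  erpPrice: number,
  vendorPrice: number
): boolean {
  if (vendorPrice > erpPrice) return allowHigher;
  if (vendorPrice < erpPrice) return allowLower;
  return true; // equal prices are always acceptable
}

describe("shouldPlaceOrder", () => {
  // One parameterized test covers every row of the decision table.
  test.each([
    { allowHigher: false, allowLower: false, erp: 10, vendor: 10, expected: true },
    { allowHigher: true, allowLower: false, erp: 10, vendor: 11, expected: true },
    { allowHigher: true, allowLower: false, erp: 10, vendor: 9, expected: false },
    { allowHigher: false, allowLower: true, erp: 10, vendor: 11, expected: false },
    { allowHigher: false, allowLower: true, erp: 10, vendor: 9, expected: true },
    { allowHigher: true, allowLower: true, erp: 10, vendor: 11, expected: true },
    { allowHigher: true, allowLower: true, erp: 10, vendor: 9, expected: true },
  ])(
    "higher=$allowHigher lower=$allowLower vendor=$vendor -> $expected",
    ({ allowHigher, allowLower, erp, vendor, expected }) => {
      expect(shouldPlaceOrder(allowHigher, allowLower, erp, vendor)).toBe(expected);
    }
  );
});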

Integration Tests

Integration tests are a good fit when you are interested in how a module interacts with its dependencies, other modules, or the infrastructure. You still use direct method calls, but there's no access to submodules, so trying to test every scenario for every input parameter of every submodule is impractical.

Typically, I prefer to have one success scenario and one failure scenario per module.

I like to use integration tests to check if a dependency injection container is built successfully, whether a processing or calculation pipeline returns the expected result, or whether complex data was read and converted correctly from a database or third-party API.
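
For illustration, here is a minimal integration-style sketch in TypeScript with Jest. The names (OrderStore, PriceChecker, buildPriceChecker) are hypothetical; the point is that real modules are wired together, only the infrastructure is replaced with an in-memory fake, and each scenario asserts only on the result:

interface OrderStore {
  getPrice(orderId: string): number | undefined;
}

class PriceChecker {
  constructor(private store: OrderStore) {}

  erpPriceFor(orderId: string): number {
    const price = this.store.getPrice(orderId);
    if (price === undefined) throw new Error(`Unknown order: ${orderId}`);
    return price;
  }
}

// Stands in for "the DI container builds successfully": real modules wired
// together, with only the infrastructure replaced by an in-memory fake.
function buildPriceChecker(store: OrderStore): PriceChecker {
  return new PriceChecker(store);
}

describe("price-checking subsystem", () => {
  const inMemoryStore: OrderStore = {
    getPrice: (id) => (id === "PO-1" ? 10 : undefined),
  };

  test("success scenario: a known order resolves to its ERP price", () => {
    const checker = buildPriceChecker(inMemoryStore);
    expect(checker.erpPriceFor("PO-1")).toBe(10);
  });

  test("failure scenario: an unknown order is rejected", () => {
    const checker = buildPriceChecker(inMemoryStore);
    expect(() => checker.erpPriceFor("PO-404")).toThrow("Unknown order");
  });
});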

Functional or E2E Tests

These tests give you the most confidence that your app works because they verify that your app can at least start without a runtime error. It’s a little more work to start testing your code without direct access to its classes, but once you understand and write the first few tests, you’ll find it’s not too difficult.

Run the application by starting a process with command-line arguments, if needed, and then use the application as your prospective customer would: by calling API endpoints or pressing buttons. This is not difficult, even in the case of UI testing: Each major platform has a tool to find a visual element in a UI.
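
As a rough sketch (TypeScript, Jest, and Node 18+ where fetch is available globally), the test below spins up a tiny HTTP server to stand in for the application. In a real functional test you would launch the actual app as a separate process and interact only through its public API:

import * as http from "http";
import { AddressInfo } from "net";

// A stand-in for "the whole application". A real functional test would start
// the actual app as a separate process and talk only to its public API.
const app = http.createServer((req, res) => {
  if (req.url === "/health") {
    res.writeHead(200, { "Content-Type": "application/json" });
    res.end(JSON.stringify({ status: "ok" }));
  } else {
    res.writeHead(404);
    res.end();
  }
});

let baseUrl = "";

beforeAll(
  () =>
    new Promise<void>((resolve) => {
      // Port 0 lets the OS pick a free port; read it back for the request below.
      app.listen(0, () => {
        const { port } = app.address() as AddressInfo;
        baseUrl = `http://127.0.0.1:${port}`;
        resolve();
      });
    })
);

afterAll(
  () =>
    new Promise<void>((resolve) => {
      app.close(() => resolve());
    })
);

test("the health endpoint responds the way an end user would see it", async () => {
  const response = await fetch(`${baseUrl}/health`);
  expect(response.status).toBe(200);
  expect(await response.json()).toEqual({ status: "ok" });
});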

Canary Tests

Functional tests let you know if your app works in a testing environment but what about a production environment? Suppose you’re working with several third-party APIs and you want to have a dashboard of their states or want to see how your application handles incoming requests. These are common use cases for canary tests.

They operate by briefly acting on the working system without causing side effects to third-party systems. For example, you can register a new user or check product availability without placing an order.

The purpose of canary tests is to be sure that all major components are working together in a production environment, not failing because of, for example, credential issues.

Load Tests

Load tests reveal whether your application will continue to work when large numbers of people start using it. They’re similar to canary and functional tests but aren’t conducted in local or production environments. Usually, a special staging environment is used, which is similar to the production environment.

It’s important to note that these tests do not use real third-party services, which might be unhappy with external load testing of their production services and may charge extra as a result.

3. Keep Testing Types Separate

When devising your automated test plan, keep each type of test separate so that it can run independently. This requires extra organization, but it's worthwhile because mixing test types creates problems.

These tests have different:

  • Intentions and basic concepts (so separating them sets a good precedent for the next person looking at the code, including “future you”).
  • Execution times (so running unit tests first allows for a quicker test cycle when a test fails).
  • Dependencies (so it’s more efficient to load only those needed within a testing type).
  • Required infrastructures.
  • Programming languages (in certain cases).
  • Positions in the continuous integration (CI) pipeline or outside it.

It’s important to note that with most languages and tech stacks, you can group, for example, all unit tests together with subfolders named after functional modules. This is convenient, reduces friction when creating new functional modules, is easier for automated builds, results in less clutter, and is one more way to simplify testing.

4. Run Your Tests Automatically

Imagine a situation in which you’ve written some tests, but after pulling your repo a few weeks later, you notice those tests are no longer passing.

This is an unpleasant reminder that tests are code and, like any other piece of code, they need to be maintained. The best time for this is right before the moment you think you’ve finished your work and want to see if everything still operates as intended. You have all the context needed and you can fix the code or change the failing tests more easily than your colleague working on a different subsystem. But this moment only exists in your mind, so the most common way to run tests is automatically after a push to the development branch or after creating a pull request.

This way, your main branch will always be in a valid state, or you will, at least, have a clear indication of its state. An automated building and testing pipeline—or a CI pipeline—helps:

  • Ensure code is buildable.
  • Eliminate potential “It works on my machine” problems.
  • Provide runnable instructions on how to prepare a development environment.

Configuring this pipeline takes time, but the pipeline can reveal a range of issues before they reach users or clients, even when you’re the sole developer.

Once running, CI also reveals new issues before they have a chance to grow in scope. As such, I prefer to set it up right after writing the first test. You can host your code in a private repository on GitHub and set up GitHub Actions. If your repo is public, you have even more options than GitHub Actions. For instance, my automated test plan runs on AppVeyor, for a project with a database and three types of tests.

I prefer to structure my pipeline for production projects as follows:

  1. Compilation or transpilation
  2. Unit tests: they’re fast and don’t require dependencies
  3. Setup and initialization of the database or other services
  4. Integration tests: they have dependencies outside of your code, but they’re faster than functional tests
  5. Functional tests: when other steps have completed successfully, run the whole app

There are no canary or load tests in this pipeline. Because of their specific requirements, they should be initiated manually.

5. Write Only Necessary Tests

Writing unit tests for all code is a common strategy, but sometimes this wastes time and energy, and doesn’t give you any confidence. If you’re familiar with the “testing pyramid” concept, you may think that all of your code must be covered with unit tests, with only a subset covered by other, higher-level tests.

I don’t see any need to write a unit test that ensures that several mocked dependencies are called in the desired order. Doing that requires setting up several mocks and verifying all the calls, but it still would not give me the confidence that the module is working. Usually, I only write an integration test that uses real dependencies and checks only the result; that gives me some confidence that the pipeline in the tested module is working properly.

In general, I write tests that make my life easier while implementing functionality and supporting it later.

For most applications, aiming for 100% code coverage adds a great deal of tedious work and eliminates the joy from working with tests and programming in general. As Martin Fowler’s Test Coverage puts it:

Test coverage is a useful tool for finding untested parts of a codebase. Test coverage is of little use as a numeric statement of how good your tests are.

Thus, I recommend installing and running a coverage analyzer after writing some tests. The report, with highlighted lines of code, will help you better understand your code's execution paths and find uncovered places that should be covered. Also, looking at your getters, setters, and facades, you'll see why 100% coverage is no fun.

6. Play Lego

From time to time, I see questions like, “How can I test private methods?” You don’t. If you’ve asked that question, something has already gone wrong. Usually, it means you violated the Single Responsibility Principle, and your module doesn’t do something properly.

Refactor the module and pull the logic you think is important into a separate module. There's no problem with increasing the number of files; doing so leads to code structured like Lego bricks: very readable, maintainable, replaceable, and testable.

 

On the left there's a stack of rectangles. The topmost one is labeled OrderProcessor and some of the ones beneath it are labeled Access Order Data, Price Check, and Place Order. An arrow points from the left-hand stack to the right, where OrderProcessor is a sideways Lego brick, with bricks in various stages of being attached and detached from it, including OrderDataProvider, PriceChecker, and OrderPlacer.

Refactoring a module to resemble Lego bricks.
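
As an illustration of the idea in the figure, here is a hypothetical TypeScript sketch: the price check that used to be a private method of OrderProcessor becomes its own small module, and OrderProcessor just snaps the bricks together. All names are made up for the example:

// Each "brick" has one responsibility and can be tested or replaced on its own.
interface OrderDataProvider {
  erpPrice(orderId: string): number;
}

interface OrderPlacer {
  place(orderId: string): void;
}

class PriceChecker {
  // The logic that used to be a private method is now directly testable.
  isAcceptable(erpPrice: number, vendorPrice: number, allowHigher: boolean, allowLower: boolean): boolean {
    if (vendorPrice > erpPrice) return allowHigher;
    if (vendorPrice < erpPrice) return allowLower;
    return true;
  }
}

class OrderProcessor {
  constructor(
    private data: OrderDataProvider,
    private checker: PriceChecker,
    private placer: OrderPlacer
  ) {}

  process(orderId: string, vendorPrice: number, allowHigher: boolean, allowLower: boolean): void {
    const erpPrice = this.data.erpPrice(orderId);
    if (this.checker.isAcceptable(erpPrice, vendorPrice, allowHigher, allowLower)) {
      this.placer.place(orderId);
    }
  }
}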

 

Properly structuring code is easier said than done. Here are two suggestions:

Functional Programming

It’s worth learning about the principles and ideas of functional programming. Most mainstream languages, like C, C++, C#, Java, Assembly, JavaScript, and Python, force you to write programs for machines. Functional programming is better suited to the human brain.

This may seem counterintuitive at first, but consider this: A computer will be fine if you put all of your code in a single method, use a shared memory chunk to store temporary values, and use a fair amount of jump instructions. Moreover, compilers in the optimization stage sometimes do this. However, the human brain doesn’t easily handle this approach.

Functional programming forces you to write pure functions without side effects, with strong types, in an expressive manner. That way it’s much easier to reason about a function because the only thing it produces is its return value. The Programming Throwdown podcast episode Functional Programming With Adam Gordon Bell will help you to gain a basic understanding, and you can continue with the Corecursive episodes God’s Programming Language With Philip Wadler and Category Theory With Bartosz Milewski. The last two greatly enriched my perception of programming.

Test-driven Development

I recommend mastering TDD. The best way to learn is to practice, and the String Calculator Kata is a great exercise to start with. Mastering the kata will take time but will ultimately allow you to fully absorb the idea of TDD, which will help you create well-structured code that is a delight to work with and is also testable.

One note of caution: Sometimes you’ll see TDD purists claiming that TDD is the only right way to program. In my opinion, it is simply another useful tool in your toolbox, nothing more.

Sometimes, you need to see how to adjust modules and processes in relation to each other and don’t know what data and signatures to use. In such cases, write code until it compiles, and then write tests to troubleshoot and debug the functionality.

In other cases, you know the input and the output you want, but have no idea how to write the implementation properly because of complicated logic. For those cases, it’s easier to start following the TDD procedure and build your code step by step rather than spend time thinking about the perfect implementation.
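
For example, the first few steps of the String Calculator Kata mentioned above might look like this in TypeScript with Jest, written test-first, one small case at a time:

// Just enough implementation to satisfy the first three tests of the kata.
function add(numbers: string): number {
  if (numbers === "") return 0;
  return numbers
    .split(",")
    .map((n) => parseInt(n, 10))
    .reduce((sum, n) => sum + n, 0);
}

describe("String Calculator", () => {
  test("an empty string returns 0", () => {
    expect(add("")).toBe(0);
  });

  test("a single number returns its value", () => {
    expect(add("7")).toBe(7);
  });

  test("two comma-separated numbers are summed", () => {
    expect(add("1,2")).toBe(3);
  });
});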

7. Keep Tests Simple and Focused

It’s a pleasure to work in a neatly organized code environment without unnecessary distractions. That’s why it’s important to apply SOLID, KISS, and DRY principles to tests—utilizing refactoring when it’s needed.

Sometimes I hear comments like, “I hate working in a heavily tested codebase because every change requires me to fix dozens of tests.” That’s a high-maintenance problem caused by tests that aren’t focused and try to test too much. The principle of “Do one thing well” applies to tests too: “Test one thing well”; each test should be relatively short and test only one concept. “Test one thing well” doesn’t mean that you should be limited to one assertion per test: You can use dozens if you’re testing non-trivial and important data mapping.

This focus is not limited to one specific test or type of test. Imagine dealing with complicated logic that you tested using unit tests, such as mapping data from the ERP system to your structure, and you have an integration test that is accessing mock ERP APIs and returning the result. In that case, it’s important to remember what your unit test already covers so you don’t test the mapping again in integration tests. Usually, it’s enough to ensure the result has the correct identification field.

With code structured like Lego bricks and focused tests, changes to business logic should not be painful. If changes are radical, you simply drop the file and its related tests, and make a new implementation with new tests. In case of minor changes, you typically change one to three tests to meet the new requirements and make changes to the logic. It’s fine to change tests; you can think about this practice as double-entry bookkeeping.

Other ways to achieve simplicity include:

  • Coming up with conventions for test file structuring, test content structuring (typically an Arrange-Act-Assert structure), and test naming; then, most importantly, following these rules consistently.
  • Extracting big code blocks to methods like “prepare request” and making helper functions for repeated actions.
  • Applying the builder pattern for test data configuration (sketched after this list).
  • Using (in integration tests) the same DI container you use in the main app so every instantiation will be as trivial as TestServices.Get() without manually creating dependencies. That way it will be easy to read, maintain, and write new tests because you already have useful helpers in place.
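
Here is a small TypeScript sketch of those points: an Arrange-Act-Assert structure plus a tiny test-data builder. The Order type and priceIncreased function are hypothetical:

interface Order {
  id: string;
  erpPrice: number;
  vendorPrice: number;
}

// A small test-data builder: tests override only the fields they care about.
class OrderBuilder {
  private order: Order = { id: "PO-1", erpPrice: 10, vendorPrice: 10 };

  withVendorPrice(price: number): OrderBuilder {
    this.order.vendorPrice = price;
    return this;
  }

  build(): Order {
    return { ...this.order };
  }
}

function priceIncreased(order: Order): boolean {
  return order.vendorPrice > order.erpPrice;
}

test("flags an order whose vendor price went up", () => {
  // Arrange
  const order = new OrderBuilder().withVendorPrice(12).build();

  // Act
  const increased = priceIncreased(order);

  // Assert
  expect(increased).toBe(true);
});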

If you feel a test is becoming too complicated, simply stop and think. Either the module or your test needs to be refactored.

8. Use Tools to Make Your Life Easier

You will face many tedious tasks while testing. For example, setting up test environments or data objects, configuring stubs and mocks for dependencies, and so on. Luckily, every mature tech stack contains several tools to make these tasks much less tedious.

I suggest you write your first hundred tests if you haven’t already, then invest some time to identify repetitive tasks and learn about testing-related tooling for your tech stack.

For inspiration, here are some tools you can use:

  • Test runners. Look for concise syntax and ease of use. From my experience, for .NET, I recommend xUnit (though NUnit is a solid choice too). For JavaScript or TypeScript, I go with Jest. Try to find the best match for your tasks and mindset because tools and challenges evolve.
  • Mocking libraries. There may be low-level mocks for code dependencies, like interfaces, but there are also higher-level mocks for web APIs or databases. For JavaScript and TypeScript, the low-level mocks included in Jest are OK. For .NET, I use Moq, though NSubstitute is great too. As for web API mocks, I enjoy using WireMock.NET. It can be used instead of an API to troubleshoot and debug response handling. It's also very reliable and fast in automated tests. Databases can be mocked using their in-memory counterparts; EF Core in .NET provides such an option.
  • Data generation libraries. These utilities fill your data objects with random data. They’re useful when, for example, you only care about a couple of fields from a big data transfer object (if that; maybe you only want to test mapping correctness). You can use them for tests and also as random data to display on a form or to fill your database. For testing purposes, I use AutoFixture in .NET.
  • UI automation libraries. These are automated users for automated tests: They can run your app, fill out forms, click on buttons, read labels, and so on. To navigate through all of the elements of your app, you don’t need to deal with clicking by coordinates or image recognition; major platforms have the tooling to find needed elements by type, identifier, or data so you don’t need to change your tests with every redesign. They are robust, so once you’ve made them work for you and CI (sometimes you find out that things work only on your machine), they’ll keep working. I enjoy using FlaUI for .NET and Cypress for JavaScript and TypeScript.
  • Assertion libraries. Most test runners include assertion tools, but there are cases in which an independent tool can help you write complex assertions using cleaner and more readable syntax, like Fluent Assertions for .NET. I especially like the function to assert that collections are equal regardless of an item’s order or its address in memory.
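
The tools above are mostly .NET-specific. As a rough Jest/TypeScript analogue, the sketch below uses jest.fn() as a low-level mock and builds an order-independent collection assertion out of arrayContaining plus a length check; the Notifier interface and notifyAll function are hypothetical:

interface Notifier {
  send(message: string): void;
}

function notifyAll(notifier: Notifier, messages: string[]): void {
  messages.forEach((m) => notifier.send(m));
}

test("sends every message, in any order", () => {
  // jest.fn() is the low-level mock standing in for the real notifier.
  const send = jest.fn();
  const notifier: Notifier = { send };

  notifyAll(notifier, ["order placed", "price changed"]);

  // Order-independent collection check: same length, same elements.
  const sent = send.mock.calls.map((call) => call[0]);
  expect(sent).toHaveLength(2);
  expect(sent).toEqual(expect.arrayContaining(["price changed", "order placed"]));
});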

May the Flow Be With You

Happiness is tightly coupled with the so-called “flow” experience described in detail in the book Flow: The Psychology of Optimal Experience. To achieve that flow experience, you must be engaged in an activity with a clear set of goals and be able to see your progress. Tasks should result in immediate feedback, for which automated tests are ideal. You also need to strike a balance between challenges and skills, which is up to every individual. Tests, particularly when approached with TDD, can help guide you and instill confidence. They help you to set specific goals, with each passed test being an indicator of your progress.

The right approach to testing can make you happier and more productive, and tests decrease the chances of burnout. The key is to view testing as a tool (or toolset) that can help you in your daily development routine, not as a burdensome step for future-proofing your code.

Testing is a necessary part of programming that allows software engineers to improve the way they work, deliver the best results, and use their time optimally. Perhaps even more importantly, tests can help developers enjoy their work more, thus boosting their morale and motivation.

Original article source at: https://www.toptal.com/

#testing #automated 

8 Automated Testing Best Practices for a Positive Testing Experience

ReverseDiffSource.jl: Reverse Automated Differentiation From Source

ReverseDiffSource.jl

Reverse automated differentiation from an expression or a function

This package provides a function rdiff() that generates valid Julia code for the calculation of derivatives up to any order for a user supplied expression or generic function. Install with Pkg.add("ReverseDiffSource"). Package documentation and examples can be found here.

This version of automated differentiation operates at the source level (provided either as an expression or a generic function) and outputs Julia code that calculates the derivatives (as an expression or a function, respectively). Compared to other automated differentiation methods, it does not rely on method overloading or new types and should, in principle, produce fast code.

Usage examples:

  • derivative of x³
    julia> rdiff( :(x^3) , x=Float64)  # 'x=Float64' indicates the type of x to rdiff
    :(begin
        (x^3,3 * x^2.0)  # expression calculates a tuple of (value, derivative)
        end)
  • first 10 derivatives of sin(x) (notice the simplifications)
    julia> rdiff( :(sin(x)) , order=10, x=Float64)  # derivatives up to order 10
    :(begin
            _tmp1 = sin(x)
            _tmp2 = cos(x)
            _tmp3 = -_tmp1
            _tmp4 = -_tmp2
            _tmp5 = -_tmp3
            (_tmp1,_tmp2,_tmp3,_tmp4,_tmp5,_tmp2,_tmp3,_tmp4,_tmp5,_tmp2,_tmp3)
        end)
  • works on functions too
	julia> rosenbrock(x) = (1 - x[1])^2 + 100(x[2] - x[1]^2)^2   # function to be derived
	julia> rosen2 = rdiff(rosenbrock, (Vector{Float64},), order=2)       # orders up to 2
		(anonymous function)
  • gradient calculation of a 3 hidden layer neural network for backpropagation
    # w1-w3 are the hidden layer weight matrices, x1 the input vector
    function ann(w1, w2, w3, x1)
        x2 = w1 * x1
        x2 = log(1. + exp(x2))   # soft RELU unit
        x3 = w2 * x2
        x3 = log(1. + exp(x3))   # soft RELU unit
        x4 = w3 * x3
        1. / (1. + exp(-x4[1]))  # sigmoid output
    end

    w1, w2, w3 = randn(10,10), randn(10,10), randn(1,10)
    x1 = randn(10)
    dann = rdiff(ann, (Matrix{Float64}, Matrix{Float64}, Matrix{Float64}, Vector{Float64}))
    dann(w1, w2, w3, x1) # network output + gradient on w1, w2, w3 and x1

Download Details:

Author: JuliaAttic
Source Code: https://github.com/JuliaAttic/ReverseDiffSource.jl 
License: MIT license

#julia #source #automated 

ReverseDiffSource.jl: Reverse Automated Differentiation From Source

10 Python Automation Scripts for Everyday Problems

Every day we need automation tools to handle our daily tasks, and we even need help from automation in our projects. In this article, you will learn about 10 Python automation scripts that solve your everyday problems. Bookmark this article and let's get started.

Either you are the one who creates the automation, or you are the one being automated.

—Tom Preston-Werner

👉 Photo Editing

Edit your photos with this awesome automation script that uses the Pillow module. Below, I've made a list of image-editing functions that you can use in your Python project or to solve any everyday problem.

This script is a handful of code snippets for programmers who need to edit their images programmatically.

# Photo Editing
# pip install pillow
from PIL import Image, ImageFilter

# Resize an image
img = Image.open('img.jpg')
resize = img.resize((200, 300))
resize.save('output.jpg')

# Blur Image
img = Image.open('img.jpg')
blur = img.filter(ImageFilter.BLUR)
blur.save('output.jpg')

# Sharp Image
img = Image.open('img.jpg')
sharp = img.filter(ImageFilter.SHARPEN)
sharp.save('output.jpg')

# Crop Image
img = Image.open('img.jpg')
crop = img.crop((0, 0, 50, 50))
crop.save('output.jpg')

# Rotate Image
img = Image.open('img.jpg')
rotate = img.rotate(90)
rotate.save('output.jpg')

# Flip Image
img = Image.open('img.jpg')
flip = img.transpose(Image.FLIP_LEFT_RIGHT)
flip.save('output.jpg')

# Transpose Image
img = Image.open('img.jpg')
transpose = img.transpose(Image.TRANSPOSE)
transpose.save('output.jpg')

# Convert Image to GreyScale
img = Image.open('img.jpg')
convert = img.convert('L')
convert.save('output.jpg')

👉 PDF Watermarker

This automation script will simply help you watermark your PDF files page by page. It uses the PyPDF4 module to read and add the watermark. Check out the code below:

# Watermark PDF files
# pip install PyPDF4
import PyPDF4

def Watermark():
    pdf_file = "test.pdf"
    output_pdf = "output.pdf"
    watermark = "watermark.pdf"

    watermark_read = PyPDF4.PdfFileReader(watermark)
    watermark_page = watermark_read.getPage(0)
    pdf_reader = PyPDF4.PdfFileReader(pdf_file)
    pdf_writer = PyPDF4.PdfFileWriter()

    for page in range(pdf_reader.getNumPages()):
        page = pdf_reader.getPage(page)
        page.mergePage(watermark_page)
        pdf_writer.addPage(page)

    # writing output pdf file
    with open(output_pdf, 'wb') as pdf:
        pdf_writer.write(pdf)

Watermark()

👉 Video Editing

Now edit your videos programmatically with this automation script. It uses the Moviepy module to edit video. The script below is handy code for trimming videos, adding VFX, and adding audio to specific parts of the video. You can explore Moviepy further for more features.

# Video Editing
# pip install moviepy
from moviepy.editor import *

# Trimming the video
clip_1 = VideoFileClip("sample_video.mp4").subclip(40, 50)
clip_2 = VideoFileClip("sample_video.mp4").subclip(68, 91)
final_clip = concatenate_videoclips([clip_1, clip_2])
final_clip.write_videofile("output.mp4")

# Adding VFX
clip_1 = (VideoFileClip("sample_video.mp4").subclip(40, 50).fx(vfx.colorx, 1.2).fx(vfx.lum_contrast, 0, 30, 100))
clip_2 = (VideoFileClip("sample_video.mp4").subclip(68, 91).fx(vfx.invert_colors))
final_clip = concatenate_videoclips([clip_1, clip_2])
final_clip.write_videofile("output.mp4")

# Add Audio to Video
clip = VideoFileClip("sample_video.mp4")
# Add audio to only first 5 sec
clip = clip.subclip(0, 5)
audioclip = AudioFileClip("audio.mp3").subclip(0, 5)
videoclip = clip.set_audio(audioclip)
videoclip.write_videofile("output.mp4")

👉 Speech-to-Text AI

You saw my code on converting text to speech, but did you know that we can also convert speech to text in Python? This amazing code will show you how. Check out the code below:

# Convert Speech to Text
# pip install SpeechRecognition
import speech_recognition as sr

def SpeechToText():
    Ai = sr.Recognizer()
    with sr.Microphone() as source:
        listening = Ai.listen(source, phrase_time_limit=6)
    try:
        command = Ai.recognize_google(listening).lower()
        print("You said: " + command)
    except sr.UnknownValueError:
        print("Sorry Can't understand, Try again")
        SpeechToText()

👉 Request API

Need to call an API request? Then try the script below. It uses the beautiful Requests module, which can GET or POST data through any API call. The code below has two parts: the first fetches the HTML source code, and the second logs in to a site.

# Request Api
# pip install requests
import requests

# Get Data
headers = {
    "Connection": "keep-alive",
    "User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/72.0.3626.121 Safari/537.36"
}
r = requests.get('https://api.example.com', headers=headers)
print(r.status_code) # 200
print(r.headers['content-type'])
print(r.content) # HTML Data

# Login Site
payload = {'username': 'USERNAME', 'userpass': 'PASSWORD'}
r = requests.post('https://example.com/login', data=payload)
print(r.status_code) # 200

👉 Python GUI

This script will help you create graphical user interface programs in Python. It uses the latest PyQt6 module, and I've coded most of the important widgets below:

# Python GUI
# pip install PyQt6
import sys
from PyQt6.QtWidgets import QApplication, QWidget, QPushButton, QMessageBox, QLabel, QLineEdit

def Application():
    app = QApplication(sys.argv)
    win = QWidget()
    win.resize(300, 300)
    win.move(200, 200)
    win.setWindowTitle('Medium Article')

    # Create Buttons
    btn = QPushButton('Quit', win)

    # Message Box
    QMessageBox.question(win, 'Message', "Are you sure to quit?")

    # Label Text
    lbl = QLabel('Hello World', win)

    # Button Clicked
    btn.clicked.connect(lambda: QMessageBox.question(win, 'Message', "Are you sure to quit?"))

    # Entry Box
    entry = QLineEdit(win)

    win.show()
    sys.exit(app.exec())

if __name__ == '__main__':
    Application()

👉 Spell Checker

Have lots of documents and a huge amount of text? If you want to check the spelling, this Python script will help you solve the problem. It uses the Pyspellchecker module to check spelling and offer correction suggestions.

# Spell Checker in Python
# pip install pyspellchecker
from spellchecker import SpellChecker

spell = SpellChecker()
Words = spell.unknown(['Python', 'is', 'a', 'good', 'lantyguage'])

for w in Words:
    print(spell.correction(w)) # language
    print(spell.candidates(w)) # { language }

👉 Grammar Checker

Inspired by Grammarly, why not try to create your own grammar checker in Python? The script below will help you check your grammar; it uses the Gingerit module, which is an API-based module.

# Grammar Checker in Python
# pip install gingerit
from gingerit.gingerit import GingerIt

text = "Welcm Progammer to Python"
Grammer = GingerIt()
Correction = Grammer.parse(text)

print(Correction["result"]) # Welcome, Programmer to Python
print(Correction['corrections'])

👉 Automate Windows, Mac, and Linux

We have automated web apps and smartphones, so why not operating systems? This automation script will automate Windows, Mac, and Linux using the PyAutoGUI module in Python. Try the code now!

# Automate Win, Mac and Linux
# pip install PyAutoGUI
import pyautogui as py

# Mouse Movements
py.moveTo(100, 100)
py.moveTo(200, 200, duration=1)
py.click(100, 100)
py.doubleClick(200, 200)

# Keyboard Inputs
py.write('Hello World!', interval=0.25)
py.press('enter')
py.hotkey('ctrl', 'c')
py.keyDown('shift')
py.keyUp('shift')

# Screen Automation
img = py.screenshot('screenshot.jpg')
img.save('screenshot.jpg')
loc = py.locateOnScreen('icon.jpg')
print(loc)

👉 Read Excel

You probably use Pandas to read CSV files, but did you know that you can also read Excel files? Take a look at the script below to see how it works:

# Read Excel
# pip install pandas
import pandas as pd

df = pd.read_excel('test.xlsx', sheet_name='Sheet1')
# Read Columns
name = df['Name'].to_list()
Id = df['Id'].to_list()
print(name) # ["haider", "Dustin", "Tadashi"]
print(Id) # [245, 552, 892]

👉 Final Thoughts

Well, I'm glad you made it to the end of this article, and I hope you found something useful. If you like this article, don't forget to share it ❤️ with your friends and hit the Clap 👏 to show the programmer some appreciation.

Happy coding!

Source: https://python.plainenglish.io/10-python-automation-scripts-for-everyday-problems-3ca0f2011282

#automated #python #script 

10 Python Automation Scripts for Everyday Problems
Royce Reinger

Release-it: Automate Versioning and Package Publishing

Release It! 🚀

🚀 Generic CLI tool to automate versioning and package publishing related tasks:

Use release-it for version management and publish to anywhere with its versatile configuration, a powerful plugin system, and hooks to execute any command you need to test, build, and/or publish your project. 

Installation

Although release-it is a generic release tool, installation requires npm. To use release-it, a package.json file is not required. The recommended way to install release-it also adds basic configuration. Answer one or two questions and it's ready:

npm init release-it

Alternatively, install it manually, and add the release script to package.json:

npm install --save-dev release-it
{
  "name": "my-package",
  "version": "1.0.0",
  "scripts": {
    "release": "release-it"
  },
  "devDependencies": {
    "release-it": "*"
  }
}

Now you can run npm run release from the command line (any release-it arguments behind the --):

npm run release
npm run release -- minor --ci

Global usage

Use release-it in any (non-npm) project, take it for a test drive, or install it globally:

# Run release-it from anywhere (without installation)
npx release-it

# Install globally and run from anywhere
npm install --global release-it
release-it

Usage

Release a new version:

release-it

You will be prompted to select the new version, and more prompts will follow based on your setup.

Run release-it from the root of the project to prevent potential issues.

Dry Runs

Use --dry-run to show the interactivity and the commands it would execute.

→ See Dry Runs for more details.

To print the next version without releasing anything, add the --release-version flag.

Configuration

Out of the box, release-it has sane defaults, and plenty of options to configure it. Most projects use a .release-it.json in the project root, or a release-it property in package.json.

→ See Configuration for more details.

Here's a quick example .release-it.json:

{
  "git": {
    "commitMessage": "chore: release v${version}"
  },
  "github": {
    "release": true
  }
}

Interactive vs. CI mode

By default, release-it is interactive and allows you to confirm each task before execution:

By using the --ci option, the process is fully automated without prompts. The configured tasks will be executed as demonstrated in the first animation above. On a Continuous Integration (CI) environment, this non-interactive mode is activated automatically.

Use --only-version to use a prompt only to determine the version, and automate the rest.

Latest version

How does release-it determine the latest version?

  1. For projects with a package.json, its version will be used (see npm to skip this).
  2. Otherwise, release-it uses the latest Git tag to determine which version should be released.
  3. As a last resort, 0.0.0 will be used as the latest version.

Alternatively, a plugin can be used to override this (e.g. to manage a VERSION or composer.json file):

Add the --release-version flag to print the next version without releasing anything.

Git

Git projects are supported well by release-it, automating the tasks to stage, commit, tag and push releases to any Git remote.

→ See Git for more details.

GitHub Releases

GitHub projects can have releases attached to Git tags, containing release notes and assets. There are two ways to add GitHub releases in your release-it flow:

  1. Automated (requires a GITHUB_TOKEN)
  2. Manual (using the GitHub web interface with pre-populated fields)

→ See GitHub Releases for more details.

GitLab Releases

GitLab projects can have releases attached to Git tags, containing release notes and assets. To automate GitLab releases:

→ See GitLab Releases for more details.

Changelog

By default, release-it generates a changelog, to show and help select a version for the new release. Additionally, this changelog serves as the release notes for the GitHub or GitLab release.

The default command is based on git log .... This setting (git.changelog) can be overridden. To further customize the release notes for the GitHub or GitLab release, there's github.releaseNotes or gitlab.releaseNotes. Make sure any of these commands output the changelog to stdout. Plugins are available for:

  • GitHub and GitLab Releases
  • auto-changelog
  • Conventional Changelog
  • Keep A Changelog

→ See Changelog for more details.

Publish to npm

With a package.json in the current directory, release-it will let npm bump the version in package.json (and package-lock.json if present), and publish to the npm registry.

→ See Publish to npm for more details.

Manage pre-releases

With release-it, it's easy to create pre-releases: a version of your software that you want to make available while it's not yet in the stable semver range. Often "alpha", "beta", and "rc" (release candidate) are used as identifiers for pre-releases. An example pre-release version is 2.0.0-beta.0.

→ See Manage pre-releases for more details.

Update or re-run existing releases

Use --no-increment to not increment the last version, but update the last existing tag/version.

This may be helpful in cases where the version was already incremented. Here are a few example scenarios:

  • To update or publish a (draft) GitHub Release for an existing Git tag.
  • Publishing to npm succeeded, but pushing the Git tag to the remote failed. Then use release-it --no-increment --no-npm to skip the npm publish and try pushing the same Git tag again.

Hooks

Use script hooks to run shell commands at any moment during the release process (such as before:init or after:release).

The format is [prefix]:[hook] or [prefix]:[plugin]:[hook]:

part     value
prefix   before or after
plugin   version, git, npm, github, gitlab
hook     init, bump, release

Use the optional :plugin part in the middle to hook into a life cycle method exactly before or after any plugin.

The core plugins include version, git, npm, github, gitlab.

Note that hooks like after:git:release will not run when either the git push failed, or when it is configured not to be executed (e.g. git.push: false). See execution order for more details on execution order of plugin lifecycle methods.

All commands can use configuration variables (like template strings). An array of commands can also be provided; they will run one after another. Some example release-it configuration:

{
  "hooks": {
    "before:init": ["npm run lint", "npm test"],
    "after:my-plugin:bump": "./bin/my-script.sh",
    "after:bump": "npm run build",
    "after:git:release": "echo After git push, before github release",
    "after:release": "echo Successfully released ${name} v${version} to ${repo.repository}."
  }
}

The variables can be found in the default configuration. Additionally, the following variables are exposed:

  • version
  • latestVersion
  • changelog
  • name
  • repo.remote, repo.protocol, repo.host, repo.owner, repo.repository, repo.project

All variables are available in all hooks. The only exception is that the additional variables listed above are not yet available in the init hook.

Use --verbose to log the output of the commands.

For the sake of verbosity, the full list of hooks is actually: init, beforeBump, bump, beforeRelease, release or afterRelease. However, hooks like before:beforeRelease look weird and are usually not useful in practice.

Plugins

Since v11, release-it can be extended in many, many ways. Here are some plugins:

Plugin                                 Description
@release-it/bumper                     Read & write the version from/to any file
@release-it/conventional-changelog     Provides recommended bump, conventional-changelog, and updates CHANGELOG.md
@release-it/keep-a-changelog           Maintain CHANGELOG.md using the Keep a Changelog standards
release-it-lerna-changelog             Integrates lerna-changelog into the release-it pipeline
release-it-yarn-workspaces             Releases each of your projects configured workspaces
release-it-calver-plugin               Enables Calendar Versioning (calver) with release-it
@grupoboticario/news-fragments         An easy way to generate your changelog file
@j-ulrich/release-it-regex-bumper      Regular expression based version read/write plugin for release-it

Internally, release-it uses its own plugin architecture (for Git, GitHub, GitLab, npm).

→ See all release-it plugins on npm.

→ See plugins for documentation to write plugins.

Distribution repository

Deprecated. Please see distribution repository for more details.

Metrics

Use --disable-metrics to opt out of sending some anonymous statistical data to Google Analytics. For details, refer to lib/metrics.js. Please consider not opting out: more data means more support for future development.

Troubleshooting & debugging

  • With release-it --verbose (or -V), release-it prints the output of every user-defined hook.
  • With release-it -VV, release-it also prints the output of every internal command.
  • Use DEBUG=release-it:* release-it [...] to print configuration and more error details.

Use verbose: 2 in a configuration file to have the equivalent of -VV on the command line.

Use release-it programmatically

While mostly used as a CLI tool, release-it can be used as a dependency to integrate in your own scripts. See use release-it programmatically for example code.

Example projects using release-it

Resources

Links

Author: Release-it
Source Code: https://github.com/release-it/release-it 
License: MIT License

#git #github #cli #hook #automated 

Release-it: Automate Versioning and Package Publishing

23 Python Automation Ideas for Business Owners

https://www.blog.duomly.com/python-automation-ideas/

Python is a versatile language that can be used for various automation tasks. You can use Python for automating file or folder management, generating reports from data stored in a database, monitoring logs on your servers, creating website scrapers, among other things. 

If you’re looking for ways to automate tasks with Python, be sure to read the article above for some ideas.

#python #Python #automation #automated #businesses #business #startup #startups 

23 Python Automation Ideas for Business Owners

How to Switch From Manual Tester To Automation?

How to Shift from Manual to Automated Testing

  1. Get Buy-In & Change Minds. ...
  2. Decide What to Automate & Who Will Do It. ...
  3. Explore Frameworks. ...
  4. Pick Tools. ...
  5. Start Small, Fail Small, Learn Fast. ...
  6. Strive for Continuous Clarity. ...
  7. Make Automation Work Now, Next Quarter, Next Year. ...
  8. See Test Automation in Action.

#automated #testing 

How to Switch From Manual Tester To Automation?
Mike Kozey

PHP_CodeSniffer tokenizes PHP Files and Detects Violations

About

PHP_CodeSniffer is a set of two PHP scripts; the main phpcs script that tokenizes PHP, JavaScript and CSS files to detect violations of a defined coding standard, and a second phpcbf script to automatically correct coding standard violations. PHP_CodeSniffer is an essential development tool that ensures your code remains clean and consistent.

Requirements

PHP_CodeSniffer requires PHP version 5.4.0 or greater, although individual sniffs may have additional requirements such as external applications and scripts. See the Configuration Options manual page for a list of these requirements.

If you're using PHP_CodeSniffer as part of a team, or you're running it on a CI server, you may want to configure your project's settings using a configuration file.

Installation

The easiest way to get started with PHP_CodeSniffer is to download the Phar files for each of the commands:

# Download using curl
curl -OL https://squizlabs.github.io/PHP_CodeSniffer/phpcs.phar
curl -OL https://squizlabs.github.io/PHP_CodeSniffer/phpcbf.phar

# Or download using wget
wget https://squizlabs.github.io/PHP_CodeSniffer/phpcs.phar
wget https://squizlabs.github.io/PHP_CodeSniffer/phpcbf.phar

# Then test the downloaded PHARs
php phpcs.phar -h
php phpcbf.phar -h

Composer

If you use Composer, you can install PHP_CodeSniffer system-wide with the following command:

composer global require "squizlabs/php_codesniffer=*"

Make sure you have the composer bin dir in your PATH. The default value is ~/.composer/vendor/bin/, but you can check the value that you need to use by running composer global config bin-dir --absolute.
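
With the default location, that could mean adding something like the following to your shell profile (a hedged sketch for a bash-style shell; verify the actual directory with the command above first):

# Make globally installed Composer binaries such as phpcs available on PATH
export PATH="$HOME/.composer/vendor/bin:$PATH"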

Alternatively, include a dependency for squizlabs/php_codesniffer in your composer.json file. For example:

{
    "require-dev": {
        "squizlabs/php_codesniffer": "3.*"
    }
}

You will then be able to run PHP_CodeSniffer from the vendor bin directory:

./vendor/bin/phpcs -h
./vendor/bin/phpcbf -h

Phive

If you use Phive, you can install PHP_CodeSniffer as a project tool using the following commands:

phive install phpcs
phive install phpcbf

You will then be able to run PHP_CodeSniffer from the tools directory:

./tools/phpcs -h
./tools/phpcbf -h

PEAR

If you use PEAR, you can install PHP_CodeSniffer using the PEAR installer. This will make the phpcs and phpcbf commands immediately available for use. To install PHP_CodeSniffer using the PEAR installer, first ensure you have installed PEAR and then run the following command:

pear install PHP_CodeSniffer

Git Clone

You can also download the PHP_CodeSniffer source and run the phpcs and phpcbf commands directly from the Git clone:

git clone https://github.com/squizlabs/PHP_CodeSniffer.git
cd PHP_CodeSniffer
php bin/phpcs -h
php bin/phpcbf -h

Getting Started

The default coding standard used by PHP_CodeSniffer is the PEAR coding standard. To check a file against the PEAR coding standard, simply specify the file's location:

$ phpcs /path/to/code/myfile.php

Or, if you wish to check an entire directory, specify the directory location instead of a file:

$ phpcs /path/to/code-directory

If you wish to check your code against the PSR-12 coding standard, use the --standard command line argument:

$ phpcs --standard=PSR12 /path/to/code-directory

If PHP_CodeSniffer finds any coding standard errors, a report will be shown after running the command.

Full usage information and example reports are available on the usage page.

Documentation

The documentation for PHP_CodeSniffer is available on the Github wiki.

Issues

Bug reports and feature requests can be submitted on the Github Issue Tracker.

Contributing

See CONTRIBUTING.md for information.

Versioning

PHP_CodeSniffer uses a MAJOR.MINOR.PATCH version number format.

The MAJOR version is incremented when:

  • backwards-incompatible changes are made to how the phpcs or phpcbf commands are used, or
  • backwards-incompatible changes are made to the ruleset.xml format, or
  • backwards-incompatible changes are made to the API used by sniff developers, or
  • custom PHP_CodeSniffer token types are removed, or
  • existing sniffs are removed from PHP_CodeSniffer entirely

The MINOR version is incremented when:

  • new backwards-compatible features are added to the phpcs and phpcbf commands, or
  • backwards-compatible changes are made to the ruleset.xml format, or
  • backwards-compatible changes are made to the API used by sniff developers, or
  • new sniffs are added to an included standard, or
  • existing sniffs are removed from an included standard

NOTE: Backwards-compatible changes to the API used by sniff developers will allow an existing sniff to continue running without fatal errors, but the sniff may require changes before it reports the same errors as it did previously.

The PATCH version is incremented when:

  • backwards-compatible bug fixes are made

NOTE: As PHP_CodeSniffer exists to report and fix issues, most bugs are the result of coding standard errors being incorrectly reported or coding standard errors not being reported when they should be. This means that the messages produced by PHP_CodeSniffer, and the fixes it makes, are likely to be different between PATCH versions.

Author: Squizlabs
Source Code: https://github.com/squizlabs/PHP_CodeSniffer 
License: BSD-3-Clause License

#cli #php #automated 

PHP_CodeSniffer tokenizes PHP Files and Detects Violations
Dexter Goodwin

1642073280

AtsPy: Automated Time Series Models in Python

Automated Time Series Models in Python (AtsPy)

Easily develop state-of-the-art time series models to forecast univariate data series. Simply load your data and select which models you want to test. This is the largest repository of automated structural and machine learning time series models. Please get in contact if you want to contribute a model. This is a fledgling project; all advice is appreciated.

Install

pip install atspy

Automated Models

  1. ARIMA - Automated ARIMA Modelling
  2. Prophet - Modeling Multiple Seasonality With Linear or Non-linear Growth
  3. HWAAS - Exponential Smoothing With Additive Trend and Additive Seasonality
  4. HWAMS - Exponential Smoothing with Additive Trend and Multiplicative Seasonality
  5. NBEATS - Neural basis expansion analysis (now fixed at 20 Epochs)
  6. Gluonts - RNN-based Model (now fixed at 20 Epochs)
  7. TATS - Seasonal and Trend no Box Cox
  8. TBAT - Trend and Box Cox
  9. TBATS1 - Trend, Seasonal (one), and Box Cox
  10. TBATP1 - TBATS1 but Seasonal Inference is Hardcoded by Periodicity
  11. TBATS2 - TBATS1 With Two Seasonal Periods

Why AtsPy?

  1. Implements all your favourite automated time series models in a unified manner by simply running AutomatedModel(df).
  2. Reduces structural model errors by 30%-50% by using LightGBM with TSFresh-infused features.
  3. Automatically identifies the seasonalities in your data using singular spectrum analysis, periodograms, and peak analysis (see the periodogram sketch after this list).
  4. Identifies and makes accessible the best model for your time series using in-sample validation methods.
  5. Combines the predictions of all these models in simple (average) and complex (GBM) ensembles for improved performance.
  6. Where appropriate, models have been developed to use GPU resources to speed up the automation process.
  7. Easily access all the models by using am.models_dict_in for in-sample and am.models_dict_out for out-of-sample prediction.
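
As a rough, hedged illustration of the periodogram idea from point 3 (this is not AtsPy's internal implementation), detecting the dominant seasonal period of a series could look like this:

import numpy as np
from scipy.signal import periodogram

def dominant_period(series: np.ndarray) -> float:
    """Return the period (in samples) of the strongest spectral peak."""
    freqs, power = periodogram(series)
    # Skip the zero-frequency (trend) component before picking the peak.
    peak = np.argmax(power[1:]) + 1
    return 1.0 / freqs[peak]

# For monthly data with yearly seasonality this should come out near 12.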

AtsPy Progress

  1. Univariate forecasting only (single column); only monthly and daily data have been tested for suitability.
  2. More work ahead; all suggestions and criticisms are appreciated, so use the issues tab.
  3. Here is a Google Colab to run the package in the cloud and here you can run all the models.

Documentation by Example


Load Package

from atspy import AutomatedModel

Pandas DataFrame

The data requires strict preprocessing: no periods can be skipped, and there cannot be any empty values (a hedged preprocessing sketch follows the example below).

import pandas as pd
df = pd.read_csv("https://raw.githubusercontent.com/firmai/random-assets-two/master/ts/monthly-beer-australia.csv")
df.Month = pd.to_datetime(df.Month)
df = df.set_index("Month"); df
Month         Megaliters
1956-01-01          93.2
1956-02-01          96.0
1956-03-01          95.2
1956-04-01          77.1
1956-05-01          70.9
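
Because no periods can be skipped and no values can be empty, a short hedged preprocessing sketch (not part of AtsPy itself; it assumes the monthly index and Megaliters column shown above) might be:

# Reindex to a complete month-start frequency and fill any gaps by interpolation.
df = df.asfreq("MS")
df["Megaliters"] = df["Megaliters"].interpolate()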

AutomatedModel

  1. AutomatedModel - Returns a class instance.
  2. forecast_insample - Returns an in-sample forecasted dataframe and performance.
  3. forecast_outsample - Returns an out-of-sample forecasted dataframe.
  4. ensemble - Returns the results of three different forms of ensembles.
  5. models_dict_in - Returns a dictionary of the fully trained in-sample models.
  6. models_dict_out - Returns a dictionary of the fully trained out-of-sample models.

from atspy import AutomatedModel

model_list = ["HWAMS", "HWAAS", "TBAT"]
am = AutomatedModel(df=df, model_list=model_list, forecast_len=20)

Other models to try (add as many as you like; note that ARIMA is slow): ["ARIMA","Gluonts","Prophet","NBEATS", "TATS", "TBATS1", "TBATP1", "TBATS2"]

In-Sample Performance

forecast_in, performance = am.forecast_insample(); forecast_in
Date          Target       HWAMS       HWAAS        TBAT
1985-10-01     181.6  161.962148  162.391653  148.410071
1985-11-01     182.0  174.688055  173.191756  147.999237
1985-12-01     190.0  189.728744  187.649575  147.589541
1986-01-01     161.2  155.077205  154.817215  147.180980
1986-02-01     155.5  148.054292  147.477692  146.773549

performance

          Target       HWAMS       HWAAS         TBAT
rmse    0.000000   17.599400   18.993827    36.538009
mse     0.000000  309.738878  360.765452  1335.026136
mean  155.293277  142.399639  140.577496   126.590412

Out-of-Sample Forecast

forecast_out = am.forecast_outsample(); forecast_out
Date             HWAMS       HWAAS        TBAT
1995-09-01  137.518755  137.133938  142.906275
1995-10-01  164.136220  165.079612  142.865575
1995-11-01  178.671684  180.009560  142.827110
1995-12-01  184.175954  185.715043  142.790757
1996-01-01  147.166448  147.440026  142.756399

Ensemble and Model Validation Performance

all_ensemble_in, all_ensemble_out, all_performance = am.ensemble(forecast_in, forecast_out)
all_performance
Model                                                                                               rmse          mse        mean
ensemble_lgb__X__HWAMS                                                                          9.697588    94.043213  146.719412
ensemble_lgb__X__HWAMS__X__HWAMS_HWAAS__X__ensemble_ts__X__HWAAS                                9.875212    97.519817  145.250837
ensemble_lgb__X__HWAMS__X__HWAMS_HWAAS                                                         11.127326   123.817378  142.994374
ensemble_lgb                                                                                   12.748526   162.524907  156.487208
ensemble_lgb__X__HWAMS__X__HWAMS_HWAAS__X__ensemble_ts__X__HWAAS__X__HWAMS_HWAAS_TBAT__X__TBAT 14.589155   212.843442  138.615567
HWAMS                                                                                          15.567905   242.359663  136.951615
HWAMS_HWAAS                                                                                    16.651370   277.268110  135.544299
ensemble_ts                                                                                    17.255107   297.738716  163.134079
HWAAS                                                                                          17.804066   316.984751  134.136983
HWAMS_HWAAS_TBAT                                                                               23.358758   545.631579  128.785846
TBAT                                                                                           39.003864  1521.301380  115.268940

Best Performing In-sample

all_ensemble_in[["Target","ensemble_lgb__X__HWAMS","HWAMS","HWAAS"]].plot()

[plot output not shown]

Future Predictions All Models

all_ensemble_out[["ensemble_lgb__X__HWAMS","HWAMS","HWAAS"]].plot()

[plot output not shown]

And Finally Grab the Models

am.models_dict_in
{'HWAAS': <statsmodels.tsa.holtwinters.HoltWintersResultsWrapper at 0x7f42f7822d30>,
 'HWAMS': <statsmodels.tsa.holtwinters.HoltWintersResultsWrapper at 0x7f42f77fff60>,
 'TBAT': <tbats.tbats.Model.Model at 0x7f42d3aab048>}
am.models_dict_out
{'HWAAS': <statsmodels.tsa.holtwinters.HoltWintersResultsWrapper at 0x7f9c01309278>,
 'HWAMS': <statsmodels.tsa.holtwinters.HoltWintersResultsWrapper at 0x7f9c01309cf8>,
 'TBAT': <tbats.tbats.Model.Model at 0x7f9c08f18ba8>}

Follow this link if you want to run the package in the cloud.

AtsPy Future Development

  1. Additional in-sample validation steps to stop deep learning models from over- and underfitting.
  2. Extra performance metrics like MAPE and MAE.
  3. Improved methods to select the window length to use in training and calibrating the model.
  4. Add the ability to accept dirty data and to clean it up (interpolation, etc.).
  5. Add a function to resample to a larger frequency for big datasets.
  6. Add the ability to algorithmically select a good enough chunk of a large dataset to balance performance and time to train.
  7. More internal model optimisation using AIC, BIC, and AICc.
  8. Code annotations for other developers to follow and improve on the work being done.
  9. Force seasonality stability between in-sample and out-of-sample training models.
  10. Make AtsPy less dependency-heavy; currently it draws on TensorFlow, PyTorch, and MXNet.

Citations

If you use AtsPy in your research, please consider citing it. I have also written a small report that can be found on SSRN.

BibTeX entry:

@software{atspy,
  title = {{AtsPy}: Automated Time Series Models in Python.},
  author = {Snow, Derek},
  url = {https://github.com/firmai/atspy/},
  version = {1.15},
  date = {2020-02-17},
}
@misc{atspy,
  author = {Snow, Derek},
  title = {{AtsPy}: Automated Time Series Models in Python (1.15).},
  year  = {2020},
  url   = {https://github.com/firmai/atspy/},
}

Author: Firmai
Source Code: https://github.com/firmai/atspy 

#python #time #automated 

AtsPy: Automated Time Series Models in Python
Aurelio Yost

1639222860

How To Perform Geolocation Testing Using xUnit: Part VII

In this video, you will learn how to perform Geolocation testing using xUnit. 

It is Part VII of the LambdaTest xUnit Tutorial series. In this video, Anton Angelov (@angelovstanton) explains geolocation testing using xUnit with a practical implementation. If you build consumer web products for different audiences, geolocation testing becomes necessary because a web application or website may behave differently when viewed from different locations. Geolocation browser testing helps you give a uniform experience to users irrespective of their location.

This video answers 🚩 
◼ How do you do geolocation?
◼ How do you test geolocation in Chrome?
◼ How do you test for geofencing?

Vɪᴅᴇᴏ Cʜᴀᴘᴛᴇʀꜱ 🔰
00:00 - Introduction  
01:00 - Session starts
02:50 - What is Geolocation testing?
04:42 - Performing Geolocation testing on the LambdaTest cloud platform
55:10 - Conclusion of the session

Start FREE testing -: https://accounts.lambdatest.com/register?utm_source=YouTube&utm_medium=YTChannel&utm_campaign=Video&utm_term=gOgAQfYYcqk

#xunit #selenium  #automated 

How To Perform Geolocation Testing Using xUnit: Part VII
Aurelio Yost

1639163400

Getting Started with Mocha: Part IV

This video explains how to write and run your first test cases in mocha. 
It is Part IV of the JavaScript Test Automation LambdaTest Tutorial series. In this video, Ryan Howard (@ryantestsstuff), an engineer, explains how we can use Mocha JS and run tests in Mocha. You will also gain insights into how Mocha testing works.

This video answers 🚩 

◼ How do you write test cases in mocha?
◼ How do I run a specific test in mocha?
◼ How do you test a mocha function?
◼ How do you write unit test cases with mocha?
◼ How do you assert in mocha?
◼ Do mocha tests run in order?

Vɪᴅᴇᴏ Cʜᴀᴘᴛᴇʀꜱ 🔰

➤ 00:00 Introduction  
➤ 01:03 About Mocha Test Framework
➤ 03:19 How to add mocha to your test?
➤ 09:58 Run your first test using Mocha

Learn more-: https://accounts.lambdatest.com/register?utm_source=YouTube&utm_medium=YTChannel&utm_campaign=Video&utm_term=hUDQOcabs0Y

#selenium  #mocha  #webdriver  #javascript  #automated 

Getting Started with Mocha: Part IV
Aurelio Yost

1639156020

Parameterized Tests In xUnit Selenium C# Part IV

In this video, you will learn how to write parameterized tests in xUnit Selenium C#.

It is Part IV of the LambdaTest xUnit .NET Core tutorial series. In this video, Anton Angelov (@angelovstanton) explains the use of xUnit with Selenium and C#, with examples showcasing how to write parameterized tests in xUnit Selenium C#.
This video answers 🚩

◼ What is the use of xUnit?
◼ How do you write xUnit test cases?
◼ Can I use xUnit for .NET framework?

Vɪᴅᴇᴏ Cʜᴀᴘᴛᴇʀꜱ 🔰
00:00 - Introduction 
00:58 - xUnit tutorial using Selenium C# begins
01:02 - Course modules
01:42 - About parameterized tests in xUnit using Selenium
03:15 - Practical begins - writing tests in xUnit Selenium C#  
27:07 - Conclusion of the session

Start FREE testing -: https://accounts.lambdatest.com/register?utm_source=v%3DTkybFNn7GLY&utm_medium=YTChannel&utm_campaign=Video

#selenium  #xunit  #csharp  #software  #automated 

Parameterized Tests In xUnit Selenium C# Part IV
Aurelio Yost

1639148644

What is Assertion in Selenium? Part III

In this video, learn what is Assertion in Selenium JavaScript? How and when do we use them? 

It is Part III of the LambdaTest JavaScript Test Automation Tutorial series. In this video, Ryan Howard (@ryantestsstuff), an engineer, explains assertions, their types, and their applicability in detail, practically showcasing what happens if a test fails while performing Selenium automation testing. This video answers how one can handle major errors/issues using assertions.

Vɪᴅᴇᴏ Cʜᴀᴘᴛᴇʀꜱ 👇

➤ 00:00 Introduction to JavaScript Testing Tutorial for beginners
➤ 01:00 What are Assertions in Selenium JavaScript
➤ 02:05 How to use Assertions in Selenium JavaScript using Node Assertion Library
➤ 12:29 How to use Assertions in Selenium JavaScript using Chai

Video also answers 🚩 
------------💨
What are the different methods of assert?
How to 'assert' and 'verify' in JavaScript Selenium.
How do you do Assertion in Selenium? 
How do you use assertions in Testing?
What are the different assertions in Testing?
What is the use of Assertion in Selenium?

Know More, Visit: https://accounts.lambdatest.com/register?utm_source=YouTube&utm_medium=YTChannel&utm_campaign=Video&utm_term=JQGETyIx_O4

#selenium  #javascript  #automated 

What is Assertion in Selenium? Part III
Aurelio Yost

1639133820

How to Write and Run Test Scripts in Selenium? Part II

This video will explain how to write the first Selenium Test in JavaScript. Explore more-: https://accounts.lambdatest.com/register?utm_source=YouTube&utm_medium=YouTubeChannel&utm_campaign=Videos&utm_term=w4cidssAdJg

This is Part II of the JavaScript Test Automation LambdaTest Tutorial series. Ryan Howard (@ryantestsstuff), a seasoned expert in Selenium & JavaScript, explains how to create & execute the test cases on the local Selenium Grid. He also demonstrates how to use Selenium Web Locators to find the required WebElement on the page and perform actions on the same.

This video answers 🌐-: 
💠How do you write the first test case in Selenium with JavaScript?
💠Can I use Selenium for JavaScript?
💠How do I start testing with Selenium with JavaScript?
💠How do I run a test script in Selenium?

Vɪᴅᴇᴏ Cᴏɴᴛᴇɴᴛ Cʜᴀᴘᴛᴇʀꜱ 🔰
----------------------◾
00:00 - Introduction to JavaScript Testing Tutorial for beginners
01:19 - Selecting an IDE to write your first Selenium test in JavaScript
01:50 - Writing your first Selenium test in JavaScript
10:48 - How to use Selenium Web Locators when writing tests in JavaScript
16:30 - How to run your first test implemented in Selenium JavaScript
18:05 - How to write and execute your first test in Selenium JavaScript in 2 minutes

In the above Test Automation in JavaScript Tutorial Series, Ryan Howard explains the fundamentals of JavaScript and its role in Selenium test automation, with practical examples. It covers everything from getting set up to building test cases on Selenium WebDriver using JavaScript, as well as using the LambdaTest platform to run tests across multiple browsers and versions in the cloud.
----------------------◾
GitHub repo for JavaScript Selenium Automation Testing: https://github.com/LambdaTest/javascript

#javascript  #selenium  #automated  #ide 

How to Write and Run Test Scripts in Selenium? Part II
Holden Zemlak

1637202480

Types of Automation Framework that You Should Know as QA: Session 14

In this video, we are going to cover the types of automation frameworks in Selenium.

We are discussing the following types of automated testing frameworks:

✅Modular Based Testing Framework.
✅Data-Driven Framework.
✅Keyword-Driven Framework.
✅Hybrid Testing Framework.

✅ What is Data Driven Testing?
Data-driven testing is a software testing method in which test data is stored in a table or spreadsheet format. It allows testers to write a single test script that executes tests for every row of test data in the table and checks the expected output against the same table (see the sketch below for the idea in code).
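
As a hedged, framework-agnostic illustration of the data-driven idea (written in Python with pytest rather than the Selenium stack discussed in the video; the login helper and its data are hypothetical stand-ins):

import pytest

# Each tuple is one row of the "data table": (username, password, should_succeed)
LOGIN_DATA = [
    ("alice", "correct-password", True),
    ("alice", "wrong-password", False),
    ("", "", False),
]

def fake_login(username: str, password: str) -> bool:
    """Toy stand-in for the application under test."""
    return username == "alice" and password == "correct-password"

@pytest.mark.parametrize("username,password,should_succeed", LOGIN_DATA)
def test_login(username, password, should_succeed):
    # One test script, executed once per row of test data.
    assert fake_login(username, password) == should_succeed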

✅What is Keyword Driven Framework?
Keyword Driven Framework is a functional automation testing framework that divides test cases into four different parts in order to separate coding from test cases and test steps for better automation.
#automated  #testautomation #selenium 

Types of Automation Framework that You Should Know as QA: Session 14