Implicit classes are a Scala feature that lets you extend the functionality of existing types by adding new methods. They are defined with the implicit keyword and take a single constructor parameter. The added methods can then be called as if they belonged to the original type, without any explicit conversion.
Implicit classes are particularly useful for adding utility methods to existing types: you get the new behaviour without creating a new type or modifying the original one. They can also be used to introduce implicit conversions, which helps keep code concise and readable.
Implicit classes in Scala do not have to extend AnyVal. They can extend any type that is a subtype of Any. However, if the implicit class is meant to be used as a value type and is simple enough, it may make sense to extend AnyVal to allow for optimized storage and improved performance.
It’s worth noting that an implicit class that extends AnyVal can only have a single constructor parameter and is subject to certain restrictions, as it is meant to represent a value type. Implicit classes that do not extend AnyVal are treated as normal classes and can have multiple constructor parameters, additional fields, and more complex logic.
So whether or not an implicit class should extend AnyVal depends on the specific use case and the intended behavior of the class.
Here’s an example to illustrate the difference between implicit classes that extend AnyVal and those that do not.
Let’s say we want to add a method to the Int type that squares its value. We can define an implicit class that takes an Int value and adds this method:
implicit class IntOps(val x: Int) extends AnyVal {
  def square: Int = x * x
}
In this case, the implicit class extends AnyVal, so it is optimized for use as a value type. We can use it like this:
scala> 5.square
res0: Int = 25
Now, let’s say we want to add a similar method to the String type that repeats its value a specified number of times. To do this, we can define an implicit class that takes a String value and adds this method:
implicit class StringOps(val s: String) {
  def repeat(n: Int): String = s * n
}
In this case, the implicit class does not extend AnyVal, because it is not meant to be used as a value type; it is treated as a normal class. We can use it like this:
scala> "Hello ".repeat(3)
res1: String = Hello Hello Hello
So, in this example, the implicit class that extends AnyVal is more optimized for performance as a value type, while the implicit class that does not extend AnyVal is treated as a normal class and can handle more complex logic.
Here’s one more example. Let’s say we want to add a method to the Int type that calculates the factorial of a number. We can define an implicit class that takes an Int value and adds this method:
implicit class IntFactorial(val n: Int) {
  def factorial: Int = {
    @annotation.tailrec
    def fact(x: Int, acc: Int): Int =
      if (x <= 1) acc else fact(x - 1, acc * x)
    fact(n, 1)
  }
}
In this case, the implicit class does not extend AnyVal. (Strictly speaking, nothing prevents a value class from containing this recursive helper; keeping it a normal class simply avoids the extra restrictions that come with value classes.) We can use this implicit class like this:
scala> 5.factorial
res2: Int = 120
So, in this example, we see that extending AnyVal is not always the right choice: when a class carries more involved logic like the factorial method, the restrictions on value classes often make a normal class that does not extend AnyVal the simpler fit.
In conclusion, whether or not an implicit class in Scala should extend AnyVal depends on the intended use case and behavior of the class. If the implicit class is meant to be used as a simple value type, extending AnyVal can yield improved performance and optimized storage. If the implicit class requires more complex logic or additional fields, it makes more sense to treat it as a normal class and not extend AnyVal. In either case, implicit classes are a convenient way to add new methods to existing types in Scala, and the choice of whether to extend AnyVal should be based on the specific requirements of each case.
Original article source at: https://blog.knoldus.com/
If you’ve been thinking about starting your online business, giving classes might’ve come to mind. As we are all well aware, the online world has a space for everyone and all types of content.
But having a space for your audience and content is not the same as choosing the type of class. The latter refers to the format, not the content.
In today’s article, we will cover 7 of the best types/formats for your lessons. We will keep it broad and include various types, with a few examples.
Let’s start.
Let’s start with a famous one: video lessons. They have been used for years for a few reasons. First, most people like multimedia content. Second, you don’t need to set up any advanced technological platform. In most cases they resemble normal lectures or presentations, which makes them easier to make and to digest for most people.
Of course, many like to add graphs, videos, and others to spice things up, but you can show them as a normal lecture.
Besides, people working close to the arts or the body will find this type very useful.
Now, as a short example, imagine you have a course about making cookies, with 10 tips and tricks as well as recipes. You then just prepare your script (it could be short) and record yourself talking about it or showing it. That is pretty much it!
There are many reasons why you would opt for this type of course. One major reason is that learning only from video, especially for technical skills, can be challenging. Besides, interactive courses keep your audience learning and engaged. You don’t want them falling asleep in the middle of your videos!
Also, this method of learning does not require you to personally grade their progress as if you were a teacher. You can set up a system of multiple-choice answers and let the technology do the job!
One great example of an online course is inarguably Duomly. It offers a broad range of courses that are well structured from top to bottom. Each section is divided into small parts, that test your learning as every new important concept is introduced.
Besides, it is suitable for people with little spare time, since they can progress as they see fit. There is no need to spend an hour watching a whole lecture!
You may rightly ask, then: why this one, if interactive courses are so great? Well, one reason is that some people are simply more old school; traditional, good old text is what they look for. Another is that some topics may be broader or more complex, which makes shortening their explanations or dividing them into pieces counterproductive. You either create room for confusion or unnecessarily prolong the course, and neither is recommended.
Another reason would be that you simply have several topics to teach about that don’t share a lot in common. In an interactive course, the ones that are not closely related won’t be mentioned; with written courses, you can definitely add the topic to your platform.
Besides, written text has a special vibe and connection with the audience that neither of the two formats above has.
Duomly, again, has a great example of this: the Duomly Blog. In it, you will find well-written, concise but well-elaborated pieces that will teach you a whole range of things. It also includes step-by-step explanations and all types of visual guides, like screenshots. All of this and more form part of an overall admirable experience.
Like video courses, webinars are accessible and very popular. One particularity is the option to interact with members of the panel, and with the audience, via Q&As. While the format is less flexible than the others, because most likely you will be showing a slide deck, it is friendly to all types of audiences: regardless of age or education level, we are all familiar with presentations. Also, because the format is stricter, you will find it less time-consuming; there is no need to post videos, let alone interactive sections, anywhere. But of course solid preparation is needed, so it is not necessarily easier.
We are all familiar with emails of all kinds, but their use for courses is largely unexplored.
Emailing is great at this and works like written online courses do. Plus, they offer similar advantages and benefits.
There are a few differences though. For example, they will tend to be shorter. Also, the way of addressing your audience is more personal, you are literally appearing in their inbox. This is different from the normal dynamic of them somehow entering your site.
Lastly, it is also important to note that this type of course won’t be for everyone. Not only because it works better for some types of content than others (imagine teaching how to play the guitar this way), but also because building the email list is a different dynamic.
One-on-one training is a favorite of those learning practical skills. One good aspect of this type is that you have full liberty to structure the class, so you can build a model that fits both your goals/preferences and your student’s.
This type can also be used for other kinds of skills, but with certain limitations. For example, say you are teaching something to someone completely new. No matter how much you both try, there is a learning process involved; you can accelerate it, but not eliminate it altogether, meaning there will be some waiting time.
This leads us to consider these classes mostly for specialized students. For example, one great idea would be to use online courses to bring everyone in your audience to a certain knowledge level, and then use 1-1 training to maximize time and overall class effectiveness.
This type of course is for those who want to get the most out of classes ASAP. The idea is to be short and intense; we are talking a few hours per day or week. It is not about pushing it, but about adding just the right amount of pressure to maximize learning. Bear that in mind!
Also, remember our previous point about the learning curve. Say you are teaching quantum physics: in 2 weeks you won’t cover a whole semester, because almost no one can handle that without getting overwhelmed and burning out.
Lastly, boot camps are great because they are flexible as well. Flexible not in when they start, but in how they run: you can use everything from books to videos, graphs, 1-1 talks, lectures, etc., and schedule the plan as you see fit. But then you stick to the schedule for good, something that 1-1 classes can change.
There are many ways to start teaching your audience about your interests and areas of expertise. Some work best for a particular subject or audience, while most entrepreneurs can use several of them in tandem. Deciding which one to start with, now that you know more about these methods, will be much easier.
Thanks for reading this article, we hope it has been useful, and as always, we wish you the best of luck.
Original article source at: https://www.blog.duomly.com/
A pseudo-class in CSS defines a special state of an element. It is a keyword that we use after a selector to apply a style based on the state of the element.
For example, the :active pseudo-class can be used on an anchor tag (<a>) to add extra styling while the link is active.
So, CSS pseudo-classes help you apply styles based on the state of the content (like :active when a link is active), on mouse events (like :hover when the user hovers over an element), and many more.
The CSS Pseudo classes follow the below syntax:
selector:pseudo-class {
  property: value;
}
This pseudo-class is used to add styling once a link has already been visited by the user. In the example below we will see how the color and background of the link change when it is visited.
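The code for this example did not survive extraction; a minimal sketch of what it might look like (selectors and colors are illustrative):

```css
/* Unvisited links are blue; once visited, they turn purple */
a:link {
  color: blue;
  background-color: transparent;
}

a:visited {
  color: purple;
  background-color: lightyellow;
}
```

Note that, for privacy reasons, browsers only let :visited change a small set of properties (mainly color-related ones such as color, background-color and border-color).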
This pseudo-class is used to add special effects and design when the user hovers over an element i.e. when a user places a mouse pointer over the element. We generally use the :hover pseudo-class with buttons and links to highlight them on hover. In the below example we will see how a button changes its color and background on hover.
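A sketch of such a hover example (colors are illustrative):

```css
/* The button swaps its colors when the pointer is over it */
button {
  color: white;
  background-color: steelblue;
}

button:hover {
  color: steelblue;
  background-color: white;
}
```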
This pseudo-class is used to select an element when it receives focus i.e. when a user clicks on it. The :focus pseudo-class is used with input fields in the forms, when the user clicks on the input field it gets focus.
In this example, we will see how the style of the input field changes when the user clicks on it.
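A sketch of what that focus styling could look like (values are illustrative):

```css
/* Highlight the focused input with a colored border */
input:focus {
  border: 2px solid steelblue;
  outline: none;
  background-color: #f0f8ff;
}
```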
This pseudo-class describes the disabled state of the element, you can add any style to the disabled element for a better representation. In the below example we will define specific color and opacity to the disabled element.
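A sketch with an illustrative color and opacity for the disabled state:

```css
/* Gray out disabled form controls */
input:disabled {
  color: gray;
  background-color: #eeeeee;
  opacity: 0.6;
  cursor: not-allowed;
}
```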
This pseudo-class is used to add styling when a user clicks on the element i.e. when the element is in an active state. It is generally used in navbar links that highlight which particular menu option is currently active.
In the below example we will show how a paragraph changes its color in an active state.
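A sketch of the paragraph example (the color choice is illustrative):

```css
/* The paragraph turns red while the mouse button is held down on it */
p:active {
  color: red;
}
```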
This pseudo-class matches form elements whose content fails validation. We apply it to input elements to add error-state styling when the input is invalid.
In the below example we will see how the invalid class comes into action when a user enters a name instead of an email in the email type input field.
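A sketch of that error styling, assuming an `<input type="email">` field (colors are illustrative):

```css
/* Flag an email field whose current content is not a valid address */
input[type="email"]:invalid {
  border: 2px solid crimson;
  background-color: #fff0f0;
}
```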
| Pseudo-class | Description | Example |
|---|---|---|
| :valid | Matches every <input> element with valid content | input:valid |
| :hover | Matches an element on mouse hover | button:hover |
| :focus | Matches an <input> element when it is focused | input:focus |
| :invalid | Matches every <input> element with invalid content | input:invalid |
| :checked | Matches every checked <input> element | input:checked |
| :active | Matches the active state of a link | a:active |
| :visited | Matches a visited link | a:visited |
| :disabled | Matches every disabled <input> element | input:disabled |
| :enabled | Matches every enabled <input> element | input:enabled |
| :required | Matches every required <input> element | input:required |
CSS Pseudo-classes
In this blog, we got to know the concept of CSS pseudo-classes and how they are implemented. However, we discussed only a few important pseudo-classes. To explore more CSS pseudo-classes, follow the MDN Docs link below.
MDN Docs : https://developer.mozilla.org/en-US/docs/Web/CSS/Pseudo-classes
For more updates on such topics, please follow our LinkedIn page- Front-end Studio.
Original article source at: https://blog.knoldus.com/
Python Classes And Objects – Object Oriented Programming
After Stack Overflow predicted that by 2019 Python would outstrip other languages in terms of active developers, the demand for certified Python developers has only grown. Python follows the object-oriented programming paradigm, which deals with declaring Python classes, creating objects from them, and interacting with those objects. In an object-oriented language, the program is split into self-contained objects, or, you could say, into several mini-programs. Each object represents a different part of the application, and the objects can communicate among themselves.
In this python class blog, you will understand each aspect of classes and objects in the following sequence:
Let’s get started.:-)
A class in Python is the blueprint from which specific objects are created. It lets you structure your software in a particular way. How? Classes allow us to logically group our data and functions in a way that is easy to reuse and easy to build upon if needed. Consider the image below.
The first image (A) represents a blueprint of a house, which can be considered a class. With the same blueprint, we can create several houses, and these can be considered objects. Using a class, you can add consistency to your programs so that they can be used in cleaner and more efficient ways. The attributes are data members (class variables and instance variables) and methods, which are accessed via dot notation.
Now, let us move ahead and see how it works in PyCharm. To get started, first have a look at the syntax of a python class.
Syntax:
class Class_name:
    statement-1
    .
    .
    statement-N
Here, the “class” statement creates a new class definition. The name of the class immediately follows the keyword “class” in python which is followed by a colon. To create a class in python, consider the below example:
class employee:
    pass

# no attributes and methods yet;
# instance variables can be created manually
emp_1 = employee()
emp_2 = employee()

emp_1.first = 'aayushi'
emp_1.last = 'Johari'
emp_1.email = 'aayushi@edureka.co'
emp_1.pay = 10000

emp_2.first = 'test'
emp_2.last = 'abc'
emp_2.email = 'test@company.com'
emp_2.pay = 10000

print(emp_1.email)
print(emp_2.email)
Output –
aayushi@edureka.co
test@company.com
Now, what if we don’t want to set these variables manually? You end up writing a lot of code, and it is also prone to error. To make it automatic, we can use the __init__ method. For that, let’s understand what exactly methods and attributes are in a Python class.
Creating a class is incomplete without some functionality. Functionality is defined by setting various attributes, which act as containers for data, and functions related to those attributes. Functions in Python are also called methods. Talking about the __init__ method, it is a special function which gets called whenever a new object of the class is instantiated. You can think of it as an initialize method, or consider it a constructor if you’re coming from another object-oriented programming background such as C++ or Java. When we define a method inside a class, it receives the instance automatically. Let’s go ahead with our Python class and accept the first name, last name and salary using this method.
class employee:
    def __init__(self, first, last, sal):
        self.fname = first
        self.lname = last
        self.sal = sal
        self.email = first + '.' + last + '@company.com'

emp_1 = employee('aayushi', 'johari', 350000)
emp_2 = employee('test', 'test', 100000)

print(emp_1.email)
print(emp_2.email)
Within our __init__ method, we set the instance variables from the parameters (first, last, sal). self is the instance, which means that whenever we write self.fname = first, it is the same as writing emp_1.fname = 'aayushi'. Then we created instances of the employee class, passing the values specified in the __init__ method. The method takes the instance as its first argument. Instead of doing it manually, it is now done automatically.
Next, we want the ability to perform some kind of action. For that, we will add a method to this class. Suppose I want the functionality to display the full name of the employee. So let’s us implement this practically.
class employee:
    def __init__(self, first, last, sal):
        self.fname = first
        self.lname = last
        self.sal = sal
        self.email = first + '.' + last + '@company.com'

    def fullname(self):
        return '{}{}'.format(self.fname, self.lname)

emp_1 = employee('aayushi', 'johari', 350000)
emp_2 = employee('test', 'test', 100000)

print(emp_1.email)
print(emp_2.email)
print(emp_1.fullname())
print(emp_2.fullname())
Output –
aayushi.johari@company.com
test.test@company.com
aayushijohari
testtest
As you can see above, I have created a method called fullname within the class. Each method inside a Python class automatically takes the instance as its first argument. Within this method, I wrote the logic to build the full name and return it, instead of concatenating the first name and last name by hand each time. I used self so that it works with all instances. Therefore, to print the full name every time, we call a method.
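To make the “methods receive the instance” point concrete, the two calls below are equivalent. This is a standalone sketch (the class is re-declared with just the pieces needed, and this format string includes a space):

```python
class employee:
    def __init__(self, first, last):
        self.fname = first
        self.lname = last

    def fullname(self):
        return '{} {}'.format(self.fname, self.lname)

emp_1 = employee('aayushi', 'johari')

# Calling the method on the instance...
print(emp_1.fullname())            # aayushi johari
# ...is exactly the same as calling it on the class,
# passing the instance explicitly as the first argument:
print(employee.fullname(emp_1))    # aayushi johari
```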
Moving ahead with Python classes: there are variables that are shared among all the instances of a class. These are called class variables. Instance variables, on the other hand, can be unique for each instance, like names, email, sal etc. Complicated? Let’s understand this with an example. Refer to the code below to find the annual rise in salary.
class employee:
    perc_raise = 1.05

    def __init__(self, first, last, sal):
        self.fname = first
        self.lname = last
        self.sal = sal
        self.email = first + '.' + last + '@company.com'

    def fullname(self):
        return '{}{}'.format(self.fname, self.lname)

    def apply_raise(self):
        self.sal = int(self.sal * self.perc_raise)

emp_1 = employee('aayushi', 'johari', 350000)
emp_2 = employee('test', 'test', 100000)

print(emp_1.sal)
emp_1.apply_raise()
print(emp_1.sal)
Output –
350000
367500
As you can see above, I printed the salary first and then applied the 5% raise (the class variable perc_raise is 1.05). To access class variables, we need to access them either through the class or through an instance of the class. Now, let’s understand the various attributes in a Python class.
Attributes in Python define a property of an object, element or file. There are two types of attributes: class attributes, shared by every instance, and instance attributes, unique to each instance. To inspect an object’s instance attributes, you can print its __dict__:
print(emp_1.__dict__)
After executing it, you will get output such as: {'fname': 'aayushi', 'lname': 'johari', 'sal': 350000, 'email': 'aayushi.johari@company.com'}
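The difference between class and instance variables also shows up in attribute lookup: class variables live in the class’s namespace, not in each instance’s __dict__. A small standalone sketch:

```python
class employee:
    perc_raise = 1.05           # class variable, shared by all instances

    def __init__(self, first):
        self.fname = first      # instance variable, unique to this instance

emp_1 = employee('aayushi')

print(emp_1.__dict__)           # {'fname': 'aayushi'} -- no perc_raise here
print(employee.perc_raise)      # 1.05, stored on the class
print(emp_1.perc_raise)         # 1.05, lookup falls back to the class

emp_1.perc_raise = 1.10         # this creates a new *instance* attribute
print(employee.perc_raise)      # still 1.05, the class is unchanged
```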
Next, we have public, protected and private attributes. Let’s understand them in detail:
| Naming | Type | Meaning |
|---|---|---|
| name | Public | These attributes can be freely used inside or outside of a class definition |
| _name | Protected | By convention, these attributes should not be used outside the class definition, except inside a subclass definition |
| __name | Private | Name-mangled by Python to _ClassName__name, so the attribute is not directly accessible outside the class definition |
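The first two conventions are just that, conventions; the double underscore triggers a real mechanism called name mangling. A quick sketch (the Account class is invented for the example):

```python
class Account:
    def __init__(self):
        self.name = 'public'        # freely accessible
        self._name = 'protected'    # convention: internal use only
        self.__name = 'private'     # stored as _Account__name

acc = Account()

print(acc.name)             # public
print(acc._name)            # works, but discouraged outside the class
# print(acc.__name)         # AttributeError: no attribute '__name'
print(acc._Account__name)   # the mangled name is still reachable
```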
Next, let’s understand the most important component in a Python class, i.e. objects.
As we discussed above, an object can be used to access different attributes. An object is an instance of the class, created at run-time.
To give you a quick overview, an object is basically everything you see around you. For example: a dog is an object of the animal class, and I am an object of the human class. Similarly, there can be different objects of the same phone class. Creating an instance looks quite similar to a function call, which we have already discussed. Let’s understand this with an example:
class MyClass:
    def func(self):
        print('Hello')

# create a new MyClass instance
ob = MyClass()
ob.func()
Moving ahead with python class, let’s understand the various OOPs concepts.
OOP refers to object-oriented programming in Python. Well, Python is not completely object-oriented, as it contains some procedural functions. Now, you must be wondering what the difference is between procedural and object-oriented programming. In procedural programming, the entire code is written as one long procedure, even though it might contain functions and subroutines. It is not manageable, as data and logic get mixed together. But in object-oriented programming, the program is split into self-contained objects, or several mini-programs. Each object represents a different part of the application, with its own data and logic, and the objects communicate among themselves. For example, a website has different objects such as images, videos etc.
Object-Oriented programming includes the concept of Python class, object, Inheritance, Polymorphism, Abstraction etc. Let’s understand these topics in detail.
Inheritance allows us to inherit attributes and methods from a base/parent class. This is useful because we can create sub-classes that get all of the functionality of the parent class, and then override or add new functionality without affecting the parent class. Let’s understand the concept of parent class and child class with an example.
As we can see in the image, a child inherits the properties from the father. Similarly, in python, there are two classes:
1. Parent class ( Super or Base class)
2. Child class (Subclass or Derived class )
A class which inherits the properties is known as Child Class whereas a class whose properties are inherited is known as Parent class.
Inheritance refers to the ability to create Sub-classes that contain specializations of their parents. It is further divided into four types namely single, multilevel, hierarchical and multiple inheritances. Refer the below image to get a better understanding.
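The four kinds can be sketched in a few lines (the class names are placeholders):

```python
class A:                # base class
    pass

class B(A):             # single inheritance: B <- A
    pass

class C(B):             # multilevel inheritance: C <- B <- A
    pass

class D(A):             # hierarchical inheritance: B and D both derive from A
    pass

class E(B, D):          # multiple inheritance: E derives from both B and D
    pass

print(E.__mro__)        # the method resolution order Python will follow
```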
Let’s go ahead with python class and understand how inheritance is useful.
Say, I want to create classes for the types of employees. I’ll create ‘developers’ and ‘managers’ as sub-classes since both developers and managers will have a name, email and salary and all these functionalities will be there in the employee class. So, instead of copying the code for the subclasses, we can simply reuse the code by inheriting from the employee.
class employee:
    num_employee = 0
    raise_amount = 1.04

    def __init__(self, first, last, sal):
        self.first = first
        self.last = last
        self.sal = sal
        self.email = first + '.' + last + '@company.com'
        employee.num_employee += 1

    def fullname(self):
        return '{} {}'.format(self.first, self.last)

    def apply_raise(self):
        self.sal = int(self.sal * self.raise_amount)

class developer(employee):
    pass

emp_1 = developer('aayushi', 'johari', 1000000)
print(emp_1.email)
Output - aayushi.johari@company.com
As you can see in the above output, all the details of the employee class are available in the developer class. Now, what if I want to change the raise_amount for a developer to 10%? Let’s see how it can be done practically.
class employee:
    num_employee = 0
    raise_amount = 1.04

    def __init__(self, first, last, sal):
        self.first = first
        self.last = last
        self.sal = sal
        self.email = first + '.' + last + '@company.com'
        employee.num_employee += 1

    def fullname(self):
        return '{} {}'.format(self.first, self.last)

    def apply_raise(self):
        self.sal = int(self.sal * self.raise_amount)

class developer(employee):
    raise_amount = 1.10

emp_1 = developer('aayushi', 'johari', 1000000)
print(emp_1.raise_amount)
Output - 1.1
As you can see, it has updated the percentage rise in salary from 4% to 10%. Now, what if I want to add one more attribute, say a programming language, in our __init__ method, when it doesn’t exist in the parent class? Is there any solution for that? Yes: we could copy the entire employee logic, but that would again increase the code size. To avoid that, consider the code below:
class employee:
    num_employee = 0
    raise_amount = 1.04

    def __init__(self, first, last, sal):
        self.first = first
        self.last = last
        self.sal = sal
        self.email = first + '.' + last + '@company.com'
        employee.num_employee += 1

    def fullname(self):
        return '{} {}'.format(self.first, self.last)

    def apply_raise(self):
        self.sal = int(self.sal * self.raise_amount)

class developer(employee):
    raise_amount = 1.10

    def __init__(self, first, last, sal, prog_lang):
        super().__init__(first, last, sal)
        self.prog_lang = prog_lang

emp_1 = developer('aayushi', 'johari', 1000000, 'python')
print(emp_1.prog_lang)
Therefore, with just a little bit of code, I have made the change. I used super().__init__(first, last, sal), which runs the base class’s initialization and inherits its properties. To conclude, inheritance is used to reuse code and reduce the complexity of a program.
Polymorphism in computer science is the ability to present the same interface for differing underlying forms. In practical terms, polymorphism means that if class B inherits from class A, it doesn’t have to inherit everything about class A; it can do some of the things that class A does differently. It is most commonly used while dealing with inheritance. Python is implicitly polymorphic: it has the ability to overload standard operators so that they have appropriate behaviour based on their context.
Let us understand with an example:
class Animal:
    def __init__(self, name):
        self.name = name

    def talk(self):
        pass

class Dog(Animal):
    def talk(self):
        print('Woof')

class Cat(Animal):
    def talk(self):
        print('MEOW!')

c = Cat('kitty')
c.talk()
d = Dog('bruno')
d.talk()
Output –
MEOW!
Woof
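The remark above about overloading standard operators refers to Python’s special “dunder” methods. A sketch with an invented Money class that gives + its own meaning:

```python
class Money:
    def __init__(self, amount):
        self.amount = amount

    def __add__(self, other):
        # called when two Money objects are combined with '+'
        return Money(self.amount + other.amount)

    def __repr__(self):
        return 'Money({})'.format(self.amount)

total = Money(30) + Money(12)
print(total)   # Money(42)
```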
Next, let us move to another object-oriented programming concept i.e Abstraction.
Abstraction is used to simplify complex reality by modelling classes appropriate to the problem. Here, we have an abstract class, which cannot be instantiated; you cannot create objects or instances of such a class. It can only be used as a base class for inheriting certain functionalities. So you can inherit its functionality, but you cannot create an instance of the class itself. Let's understand the concept of an abstract class with the example below:
from abc import ABC, abstractmethod

class Employee(ABC):
    @abstractmethod
    def calculate_salary(self, sal):
        pass

class Developer(Employee):
    def calculate_salary(self, sal):
        finalsalary = sal * 1.10
        return finalsalary

emp_1 = Developer()
print(emp_1.calculate_salary(10000))
Output –
11000.0
As you can see in the above output, we have increased the base salary by 10%, i.e. the salary is now 11000. Now, if you actually go on and make an object of the class "Employee", it throws an error, as Python doesn't allow you to create an object of an abstract class. But using inheritance, you can inherit its properties and perform the respective tasks.
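The error mentioned above can be seen directly; a minimal sketch (the exact message text varies between Python versions, so only the exception type is relied upon):

```python
from abc import ABC, abstractmethod

class Employee(ABC):
    @abstractmethod
    def calculate_salary(self, sal):
        pass

# Python refuses to instantiate a class with unimplemented abstract methods:
try:
    Employee()
except TypeError as err:
    print("cannot instantiate:", err)
```

Any subclass that implements calculate_salary (like Developer above) can be instantiated normally.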
So guys, this was all about Python classes and objects in a nutshell. We have covered all the basics of Python classes, objects and various object-oriented concepts in Python, so you can start practicing now. I hope you enjoyed reading this blog on "Python Class" and are clear about every aspect discussed above. After the Python class blog, I will be coming up with more blogs on Python for the scikit-learn library and arrays. Stay tuned!
Got a question for us? Please mention it in the comments section of this “Python Class” blog and we will get back to you as soon as possible.
To get in-depth knowledge of Python along with its various applications, you can enroll here with our live online training with 24/7 support and lifetime access.
Original article source at: https://www.edureka.co/
1667965068
In this Typescript Classes tutorial, you will learn how to create and extend classes, implement interfaces, apply visibility modifiers, use static class members, and create getters and setters in TS classes.
TypeScript Classes Tutorial | TS for Beginners Lesson
(00:00) Intro
(00:05) Welcome
(00:28) Starter Code
(01:06) Basic Class
(02:34) Larger Class
(04:23) Visibility Modifiers
(06:24) Definite Assignment Assertion Operator
(07:22) Private & Protected Examples
(10:26) Compiling & Running Code
(12:02) Extends for Subclasses
(16:28) Implements for Interfaces
(20:31) Static Class Members
(24:52) Getters & Setters
#typescript #classes #interface
1666005484
The following is the CBSE syllabus for the Class 9 Maths Term 1 & Term 2 exams. Understanding the CBSE Class 9 Maths syllabus will help students develop an effective learning program. In addition, students will know the important topics that are likely to be asked in the exam.
#cbsesyllabusclass9 #classes #mathematic #cbsesyllabus
#maths
1663744170
Measurement is the comparison of an object's physical characteristics to a reference standard. The Class 11 Physics Chapter 2 Notes provide a thorough understanding of all measurement types, units, dimensions, and measurement errors.
https://www.pw.live/physics-questions-mcq/units-and-measurements
#classes #physics #class11notes #ncertsolutions #science #measurement
1660354980
library("devtools"); install_github("lme4/lme4", dependencies=TRUE)
(This requires devtools >= 1.6.1, and installs the "master" (development) branch.) This approach builds the package from source, i.e. make and compilers must be installed on your system -- see the R FAQ for your operating system; you may also need to install dependencies manually. Specify build_vignettes=FALSE if you have trouble because your system is missing some of the LaTeX/texi2dvi tools.
Development binaries are available from the lme4 r-forge repository:
install.packages("lme4",
repos=c("http://lme4.r-forge.r-project.org/repos",
getOption("repos")[["CRAN"]]))
(these source and binary versions are updated manually, so may be out of date; if you believe they are, please contact the maintainers).
It is possible to install (but not easily to check) lme4 on older versions of R, at least as recently as 1.1-7:
- make sure you have exactly these package versions: Rcpp 0.10.5, RcppEigen 3.2.0.2
- install with --no-inst; this is necessary in order to prevent R from getting hung up by the knitr-based vignettes
- running R CMD check is difficult, but possible if you hand-copy the contents of the inst directory into the installed package directory ...
lme4.0 is a maintained version of lme4 back compatible to CRAN versions of lme4 0.99xy, mainly for the purpose of reproducible research and data analysis which was done with 0.99xy versions of lme4.
- there have been reports of problems with lme4.0 on R version 3.1; if someone has a specific reproducible example they'd like to donate, please contact the maintainers
- lme4.0 features getME(<mod>, "..") which is compatible (as much as sensibly possible) with the current lme4's version of getME()
- a convert_old_lme4() function is provided to take a fitted object created with lme4 <1.0 and convert it for use with lme4.0
- install it with:
install.packages("lme4.0",
repos=c("http://lme4.r-forge.r-project.org/repos",
getOption("repos")[["CRAN"]]))
(if the binary versions are out of date or unavailable for your system, please contact the maintainers).
The r-sig-mixed-models@r-project.org mailing list handles lme4 usage and more general mixed model questions; please read the info page, and subscribe, before posting ... (note that the mailing list does not support images or large/non-text attachments).
If you choose to support lme4 development financially, you can contribute to a fund at McMaster University (home institution of one of the developers) here. The form will say that you are donating to the "Global Coding Fund"; this fund is available for use by the developers, under McMaster's research spending rules. We plan to use the funds, as available, to pay students to do maintenance and development work. There is no way to earmark funds or set up a bounty to direct funding toward particular features, but you can e-mail the maintainers and suggest priorities for your donation.
Author: lme4
Source Code: https://github.com/lme4/lme4
License: View license
1659664080
One of the most common tasks in computer vision is object detection. It is the foundation for understanding and interacting with a scene.
Object detection is used in everything from simple applications such as detecting items to complicated jobs such as self-driving cars, which must understand diverse scenes and make judgments based on them. Security cameras and even current mobile phones have similar capabilities built in for a variety of functions.
Today, YOLO (You Only Look Once) is among the better object detection model frameworks, and this model is the latest addition to the YOLO model family. YOLO was the first object detection model to combine bounding-box prediction and object classification into a single end-to-end differentiable network. It was created and is maintained under the Darknet framework. YOLOv5 was the first YOLO model written on the PyTorch framework, making it much more lightweight and easier to use. However, YOLOv7 does not outperform YOLOv6 on the standard COCO benchmark dataset, because it made no fundamental architectural improvements over the network in YOLOv6.
Yolo v6 has some flaws, such as poor performance on tiny objects and poor generalization when object dimensions are not equal.
This is a picture from the original YOLO paper demonstrating how YOLO operates. It has come a long way since then, and we are now on version 5. Although it was not written by any of the original authors or contributors, it follows the same basic strategy. It is written in PyTorch, which is a plus. In this version of Yolo, mosaic augmentation is used, and augmentation and different scaling approaches provide numerous enhancements.
To get your object detector up and running, you must first collect training photographs. You should think carefully about the task you are trying to complete and plan ahead for the components of the task your model may find difficult. To improve the accuracy of the final model, I recommend reducing the domain your model must handle as much as possible.
For YOLOv7 custom training, we need to develop a dataset. If you do not have any data, you can use the openimages database.
Use LabelImg or any annotation tool to annotate the dataset. Create a file with the same name as the image for the annotation text.
Prepare a set, for example, corresponding to
YOLOv7 accepts label data in text (.txt) files in the following format:
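The sample that showed the format did not survive extraction. By convention, each line of a YOLO label file holds one object as `class x_center y_center width height`, with coordinates normalized to [0, 1] by the image size. A small helper (hypothetical, not from the original post) that builds such a line from a pixel-space box:

```python
def to_yolo(cls_id, x_min, y_min, x_max, y_max, img_w, img_h):
    """Convert a pixel-space bounding box to a normalized YOLO label line."""
    x_c = (x_min + x_max) / 2 / img_w   # box center x, normalized
    y_c = (y_min + y_max) / 2 / img_h   # box center y, normalized
    w = (x_max - x_min) / img_w         # box width, normalized
    h = (y_max - y_min) / img_h         # box height, normalized
    return f"{cls_id} {x_c:.6f} {y_c:.6f} {w:.6f} {h:.6f}"

print(to_yolo(0, 100, 200, 300, 400, 640, 640))
# → 0 0.312500 0.468750 0.312500 0.312500
```

Annotation tools such as LabelImg can export this format directly, so the helper is only needed when converting labels from another format.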
After labeling the data, we will divide it into train and test folders. The split ratio is up to the user, but the most common split is (80-20)%, meaning 80% of the data is used for training and 20% for testing. *Images and labels are stored in the prescribed folder architecture.
*For the data split, look at the python library split-folders, which randomly divides your data into train, test, and validation sets.
The following pip command installs the data-splitting library:
pip install split-folders
The input folder should be formatted as follows:
In order to provide you with this:
Separate the files into a training and validation set (and optionally a test set). The final dataset folder looks like the following before entering YOLOv7 training:
├── yolov7
│   ├── train
│   │   ├── images (folder containing all training images)
│   │   └── labels (folder containing all training labels)
│   ├── test
│   │   ├── images (folder containing all test images)
│   │   └── labels (folder containing all test labels)
│   └── valid
│       ├── images (folder containing all validation images)
│       └── labels (folder containing all validation labels)
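The split-folders package above does the random split for you; as a standard-library sketch of the same 80-20 idea (hypothetical helper, not from the original post):

```python
import random

def split_dataset(filenames, train_frac=0.8, seed=1337):
    """Shuffle file names deterministically and split into train/test lists."""
    files = sorted(filenames)            # sort first so the split is reproducible
    random.Random(seed).shuffle(files)   # seeded shuffle, independent of global state
    cut = int(len(files) * train_frac)
    return files[:cut], files[cut:]

train, test = split_dataset([f"img_{i}.jpg" for i in range(10)])
print(len(train), len(test))  # → 8 2
```

Whatever tool does the split, the image and its .txt label must always land in the same subset, or training silently loses annotations.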
We must now develop a custom configuration file. (Be sure to specify the correct directories), as the training process will depend entirely on that file.
Create a file named "custom.yaml" in the (yolov7/data) folder. In that file, paste the code below. Set the correct path to the dataset folder, change the number of classes and their names, and save it.
Create a file specifying the training configuration. In the custom.yaml file, write the following:
train: (Complete path to dataset train folder)
test: (Complete path to dataset test folder)
valid: (Complete path to dataset valid folder)
#Classes
nc: 1 # replace classes count
#classes names
#replace all class names list with your custom classes
names: ['person', 'bicycle', 'car', 'motorcycle', 'airplane', 'bus', 'train', 'truck', 'boat', 'traffic light',
'fire hydrant', 'stop sign', 'parking meter', 'bench', 'bird', 'cat', 'dog', 'horse', 'sheep', 'cow',
'elephant', 'bear', 'zebra', 'giraffe', 'backpack', 'umbrella', 'handbag', 'tie', 'suitcase', 'frisbee',
'skis', 'snowboard', 'sports ball', 'kite', 'baseball bat', 'baseball glove', 'skateboard', 'surfboard',
'tennis racket', 'bottle', 'wine glass', 'cup', 'fork', 'knife', 'spoon', 'bowl', 'banana', 'apple',
'sandwich', 'orange', 'broccoli', 'carrot', 'hot dog', 'pizza', 'donut', 'cake', 'chair', 'couch',
'potted plant', 'bed', 'dining table', 'toilet', 'tv', 'laptop', 'mouse', 'remote', 'keyboard', 'cell phone',
'microwave', 'oven', 'toaster', 'sink', 'refrigerator', 'book', 'clock', 'vase', 'scissors', 'teddy bear']
Once all the preprocessing steps are complete, you are ready to start training. Launch a terminal in the main "yolov7" directory, activate the virtual environment, and execute the commands listed below.
git clone https://github.com/WongKinYiu/yolov7.git # clone
cd yolov7
pip install -r requirements.txt # install modules
wget https://github.com/WongKinYiu/yolov7/releases/download/v0.1/yolov7.pt # download pretrained weight
The official repository contains pretrained weights for the model.
Remarks:
Depending on the usage environment, such as Google Colab, GPU memory may be insufficient. In that case, you can train with a reduced batch size.
python train.py --weights yolov7.pt --data "data/custom.yaml" --workers 4 --batch-size 4 --img 416 --cfg cfg/training/yolov7.yaml --name yolov7 --hyp data/hyp.scratch.p5.yaml
— img = size of the images the model will train on; the default value is 640.
— batch-size = batch size used for custom dataset training.
— epochs = number of training epochs to obtain the best model.
— data = path of the custom configuration file.
— weights = pretrained yolov7 weights (yolov7.pt).
Note: If any image is corrupted, training will not begin. If any label file is corrupted, training will not begin either, as yolov7 will ignore that image and its label file.
Wait for training to finish before running inference with the newly formed weights. The custom-trained weights will be saved in the following folder path:
[yolov7/runs/train/yolov7/weights/best.pt]
When training is over, go to the terminal and execute the command listed below to run detection with the custom weights.
python detect.py --weights runs/train/yolov7/weights/best.pt --source "path to your testing image"
You can use YOLO to design your own custom detection model for anything you want.
Yolo v7 is a significant advance in terms of speed and accuracy, and it matches or even outperforms RPN-based models. The model is fast and reliable, and can now be used for anything.
That is all there is to "Training YOLOv7 on custom data". You can experiment with your own data. YOLOv7 is lightweight and easy to use; it trains quickly, reaches good conclusions, and performs well.
The key takeaways from YOLOv7 above are summarized as follows:
1659658320
L'une des tâches les plus courantes en vision par ordinateur est la détection d'objets. C'est la base pour comprendre et interagir avec la scène.
La détection d'objets est utilisée dans tout, des applications simples comme la détection d'objets aux tâches complexes comme les automobiles autonomes pour comprendre divers scénarios et porter des jugements en fonction de ceux-ci. Les caméras de sécurité et même les téléphones portables actuels ont des fonctionnalités similaires intégrées pour une variété de fonctions.
De nos jours, YOLO (You Only Look Once) est le meilleur cadre de modèle de détection d'objets , et ce modèle est le dernier ajout à la famille de modèles YOLO. YOLO a été le premier modèle de détection d'objets à intégrer la prédiction de la boîte englobante et la classification des objets dans un seul réseau différentiable de bout en bout. Il a été créé et est maintenu sous le framework Darknet. YOLOv5 est le premier modèle YOLO écrit sur le framework PyTorch, et il est beaucoup plus léger et plus facile à utiliser. Cependant, YOLOv7 ne surpasse pas YOLOv6 sur un benchmark standard, l'ensemble de données COCO, car il n'a pas apporté d'améliorations architecturales fondamentales au réseau dans YOLOv6.
Yolo v6 présente quelques défauts, tels que de mauvaises performances sur des objets minuscules et une mauvaise généralisation lorsque les dimensions des objets ne sont pas égales.
Ceci est une image de l'article YOLO original démontrant le fonctionnement de YOLO. Il a parcouru un long chemin depuis lors, et nous sommes maintenant sur la version 5. Malgré le fait qu'il n'a été écrit par aucun des auteurs ou contributeurs originaux, il suit la même stratégie de base. Il est écrit en PyTorch, ce qui est un plus. Dans cette version de la mosaïque Yolo, l'augmentation est utilisée, et l'augmentation et différentes approches de mise à l'échelle fournissent de nombreuses améliorations.
Pour que votre détecteur d'objets soit opérationnel, vous devez d'abord collecter des photographies d'entraînement. Vous devez réfléchir attentivement à l'activité que vous tentez de réaliser et planifier à l'avance les composants de la tâche que votre modèle peut trouver difficiles. Pour améliorer la précision de votre modèle final, je vous recommande de réduire autant que possible le domaine que votre modèle doit gérer.
Pour la formation personnalisée YOLOv7, nous devons développer un ensemble de données. Si vous n'avez pas de données, vous pouvez utiliser la base de données openimages .
Utilisez LabelImg ou n'importe quel outil d'annotation pour annoter l'ensemble de données. Créez un fichier portant le même nom que l'image et le texte d'annotation.
Préparez un ensemble, par exemple, correspondant à
YOLOv7 accepte les données d'étiquette dans les fichiers texte (.txt) au format suivant :
Après avoir étiqueté vos données, nous les diviserons en dossiers d'apprentissage et de test. Le rapport de division sera déterminé par l'utilisateur, mais la division la plus courante est (80-20) %, ce qui implique que 80 % des données sont utilisées pour la formation et 20 % pour les tests. *Les images et les étiquettes sont stockées dans l'architecture de dossier indiquée.
* Pour le fractionnement des données, consultez la bibliothèque python - Split Folder, qui divisera au hasard vos données en train, test et validation.
La commande pip suivante pour installer la bibliothèque de fractionnement de données
pip installer des dossiers divisés
Le dossier d'entrée doit être formaté comme suit :
Afin de vous fournir ceci :
Séparez les fichiers en un ensemble d'entraînement et un ensemble de validation (et éventuellement un ensemble de test). Le dossier de l'ensemble de données final ressemble à ci-dessous avant d'entrer dans la formation YOLOv7,
├── yolov7
## └── train
####└── images (dossier contenant toutes les images d'entraînement)
####└── labels (dossier contenant toutes les étiquettes d'entraînement)
## └── test
#### └── images (dossier contenant toutes les images de test)
####└── labels (dossier contenant toutes les étiquettes de test)
## └── valid
####└── images (dossier contenant toutes les images valides)
### #└── étiquettes (dossier contenant toutes les étiquettes valides)
Nous devons maintenant développer un fichier de configuration personnalisé. (Assurez-vous de spécifier le répertoire approprié), car le processus de formation dépendra entièrement de ce fichier.
Créez un fichier avec le nom "custom.yaml" dans le dossier (yolov7/data). Dans ce fichier, collez le code ci-dessous. Définissez le chemin d'accès correct au dossier du jeu de données, modifiez le nombre de classes et leurs noms, puis enregistrez-le.
Créez un fichier qui spécifie la configuration de la formation. Dans le fichier custom.yaml , écrivez ce qui suit :
train: (Complete path to dataset train folder)
test: (Complete path to dataset test folder)
valid: (Complete path to dataset valid folder)
#Classes
nc: 1 # replace classes count
#classes names
#replace all class names list with your custom classes
namesnames: ['person', 'bicycle', 'car', 'motorcycle', 'airplane', 'bus', 'train', 'truck', 'boat', 'traffic light',
'fire hydrant', 'stop sign', 'parking meter', 'bench', 'bird', 'cat', 'dog', 'horse', 'sheep', 'cow',
'elephant', 'bear', 'zebra', 'giraffe', 'backpack', 'umbrella', 'handbag', 'tie', 'suitcase', 'frisbee',
'skis', 'snowboard', 'sports ball', 'kite', 'baseball bat', 'baseball glove', 'skateboard', 'surfboard',
'tennis racket', 'bottle', 'wine glass', 'cup', 'fork', 'knife', 'spoon', 'bowl', 'banana', 'apple',
'sandwich', 'orange', 'broccoli', 'carrot', 'hot dog', 'pizza', 'donut', 'cake', 'chair', 'couch',
'potted plant', 'bed', 'dining table', 'toilet', 'tv', 'laptop', 'mouse', 'remote', 'keyboard', 'cell phone',
'microwave', 'oven', 'toaster', 'sink', 'refrigerator', 'book', 'clock', 'vase', 'scissors', 'teddy bear']
Une fois toutes les procédures de prétraitement terminées, il est prêt à commencer la formation. Lancez le terminal dans le principal " yolov7 ", activez l'environnement virtuel et exécutez les commandes répertoriées ci-dessous.
git clone https://github.com/WongKinYiu/yolov7.git # clone
cd yolov7
pip install -r requirements.txt # install modules
wget https://github.com/WongKinYiu/yolov7/releases/download/v0.1/yolov7.pt # download pretrained weight
The official repository contains pretrained weights for your model.
Notes:
Depending on where you run this, such as Google Colab, GPU memory may be insufficient. In that situation you can still train by reducing the batch size.
python train.py --weights yolov7.pt --data "data/custom.yaml" --workers 4 --batch-size 4 --img 416 --cfg cfg/training/yolov7.yaml --name yolov7 --hyp data/hyp.scratch.p5.yaml
--img = size of the images the model will train on; the default value is 640.
--batch-size = batch size used for training on the custom dataset.
--epochs = number of training epochs to run to get the best model.
--data = path to the custom configuration file.
--weights = pretrained YOLOv7 weights (yolov7.pt).
Note: if any image is corrupted, training will not begin; if any label file is corrupted, YOLOv7 will ignore that image and its label file.
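Given the note above about corrupted files blocking training, a quick pre-scan of the dataset can save a failed run. A minimal standard-library sketch that only checks image magic bytes and empty label files; a real decode (e.g. with Pillow) would be stricter, and `find_suspect_files` is a hypothetical helper name, not part of YOLOv7:

```python
import os

# Sketch: pre-scan a dataset folder for obviously broken files before training.
# Only checks magic bytes / emptiness; decoding each image fully is stricter.
MAGIC = (b"\xff\xd8\xff", b"\x89PNG\r\n\x1a\n")  # JPEG, PNG signatures

def looks_like_image(path):
    with open(path, "rb") as f:
        head = f.read(8)
    return any(head.startswith(m) for m in MAGIC)

def find_suspect_files(root):
    suspects = []
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            if name.lower().endswith((".jpg", ".jpeg", ".png")):
                if not looks_like_image(path):
                    suspects.append(path)
            elif name.lower().endswith(".txt"):
                # Empty label files can be intentional (no objects);
                # they are listed here only so you can review them.
                if os.path.getsize(path) == 0:
                    suspects.append(path)
    return suspects
```

Run it on the dataset root and inspect whatever it reports before launching train.py.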
Wait for training to finish before running inference with the freshly trained weights. The custom-trained weights will be saved in the following folder path.
[yolov7/runs/train/yolov7/weights/best.pt]
Once training is complete, go to the terminal and run the command below to run detection with the custom weights.
python detect.py --weights runs/train/yolov7/weights/best.pt --source "path to your testing image"
You can use YOLO to design your own custom detection model for anything you want.
YOLOv7 is a significant advance in terms of speed and accuracy, and it matches or even outperforms RPN-based models. The model is fast and reliable, and it can now be used for anything.
That is all there is to "training YOLOv7 on custom data". You can experiment with your own data. YOLOv7 is lightweight and simple to use; it trains quickly, produces good inferences, and performs well.
The key takeaways from the YOLOv7 walkthrough above are summarized as follows:
1659654660
One of the most common tasks in computer vision is object detection. It is the foundation for understanding and interacting with a scene.
Object detection is used in everything from simple applications such as item detection to complex jobs such as self-driving cars, where it helps the vehicle understand diverse situations and make judgments based on them. Security cameras and even today's mobile phones have similar features built in for a variety of functions.
Today, YOLO (You Only Look Once) is the better object detection model framework, and this model is the latest addition to the YOLO family. YOLO was the first object detection model to combine bounding-box prediction and object classification into a single end-to-end differentiable network. It was created and is maintained under the Darknet framework. YOLOv5 was the first YOLO model written in the PyTorch framework, and it is much lighter and easier to use. However, YOLOv7 does not outperform YOLOv6 on the standard benchmark, the COCO dataset, because it does not make fundamental architectural improvements to the network used in YOLOv6.
YOLOv6 has a few shortcomings, such as poor performance on small items and poor generalization when object dimensions are not uniform.
Here is an image from the original YOLO paper showing how YOLO works. It has come a long way since then, and we are now at version 5. Even though it was not written by any of the original authors or contributors, it follows the same basic strategy. It is written in PyTorch, which is a plus. In this version of YOLO, mosaic augmentation is used, and the various scaling approaches provide many improvements.
To get your object detector up and running, you must first collect training photos. You should think carefully about the task you are trying to accomplish and plan ahead for the parts of the task that your model may find difficult. To improve the accuracy of your final model, I recommend reducing the domain your model has to handle as much as possible.
For YOLOv7 custom training, we need to build a dataset. If you do not have any data, you can use the openimages database.
Use LabelImg or any annotation tool to annotate the dataset. Create an annotation text file with the same name as the image.
Prepare a set corresponding to, for example,
YOLOv7 accepts label data in text (.txt) files in the following format:
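The format line itself did not survive extraction here; for reference, the standard YOLO label format is one object per line, `class_id x_center y_center width height`, with all coordinates normalized to [0, 1] by the image size. A small sketch converting a pixel-space box to such a line (the helper name is illustrative):

```python
# Sketch: convert a pixel-space box (x_min, y_min, x_max, y_max) to a
# YOLO label line. Coordinates are normalized by image width/height.
def to_yolo_line(class_id, box, img_w, img_h):
    x_min, y_min, x_max, y_max = box
    x_c = (x_min + x_max) / 2 / img_w
    y_c = (y_min + y_max) / 2 / img_h
    w = (x_max - x_min) / img_w
    h = (y_max - y_min) / img_h
    return f"{class_id} {x_c:.6f} {y_c:.6f} {w:.6f} {h:.6f}"

# A 200x200 box centered at (200, 300) in a 640x480 image:
print(to_yolo_line(0, (100, 200, 300, 400), 640, 480))
# -> 0 0.312500 0.625000 0.312500 0.416667
```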
After you have labeled your data, split it into training and testing folders. The split ratio is up to you, but the most common split is (80-20) percent, meaning 80 percent of the data is used for training and 20 percent for testing. * Images and labels are stored in the folder structure shown.
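The 80-20 split described above can also be done without extra dependencies. A minimal sketch using only the standard library; `split_dataset` is an illustrative helper, not part of YOLOv7, and it assumes images and their .txt label files sit side by side in the source folder:

```python
import os
import random
import shutil

# Sketch: randomly split image/label pairs into train/ and test/ folders (80-20).
def split_dataset(src, dst, train_ratio=0.8, seed=42):
    images = sorted(f for f in os.listdir(src)
                    if f.lower().endswith((".jpg", ".jpeg", ".png")))
    random.Random(seed).shuffle(images)  # fixed seed for a reproducible split
    cut = int(len(images) * train_ratio)
    for subset, files in (("train", images[:cut]), ("test", images[cut:])):
        for kind in ("images", "labels"):
            os.makedirs(os.path.join(dst, subset, kind), exist_ok=True)
        for img in files:
            label = os.path.splitext(img)[0] + ".txt"
            shutil.copy(os.path.join(src, img),
                        os.path.join(dst, subset, "images", img))
            if os.path.exists(os.path.join(src, label)):
                shutil.copy(os.path.join(src, label),
                            os.path.join(dst, subset, "labels", label))
```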
* To split the data, see the Python library split-folders, which will randomly split your data into train, test, and validation sets.
The following pip command installs the data-splitting library:
pip install split-folders
The input folder should be formatted as follows:
To give you this:
Split the files into a training set and a validation set (and optionally a test set). The final dataset folder looks like the one below before starting YOLOv7 training:
├── yolov7
## └── train
#### └── images (folder containing all training images)
#### └── labels (folder containing all training labels)
## └── test
#### └── images (folder containing all test images)
#### └── labels (folder containing all test labels)
## └── valid
#### └── images (folder containing all validation images)
#### └── labels (folder containing all validation labels)
We now need to create a custom configuration file (make sure to specify the appropriate directory), because the training process will depend entirely on that file.
Create a file named "custom.yaml" in the (yolov7/data) folder. Paste the code below into that file, set the correct path to the dataset folder, change the number of classes and their names, then save it.
Create a file that specifies the training configuration. In the custom.yaml file, write the following:
train: (Complete path to dataset train folder)
test: (Complete path to dataset test folder)
valid: (Complete path to dataset valid folder)
#Classes
nc: 1 # replace classes count
#classes names
#replace all class names list with your custom classes
names: ['person', 'bicycle', 'car', 'motorcycle', 'airplane', 'bus', 'train', 'truck', 'boat', 'traffic light',
'fire hydrant', 'stop sign', 'parking meter', 'bench', 'bird', 'cat', 'dog', 'horse', 'sheep', 'cow',
'elephant', 'bear', 'zebra', 'giraffe', 'backpack', 'umbrella', 'handbag', 'tie', 'suitcase', 'frisbee',
'skis', 'snowboard', 'sports ball', 'kite', 'baseball bat', 'baseball glove', 'skateboard', 'surfboard',
'tennis racket', 'bottle', 'wine glass', 'cup', 'fork', 'knife', 'spoon', 'bowl', 'banana', 'apple',
'sandwich', 'orange', 'broccoli', 'carrot', 'hot dog', 'pizza', 'donut', 'cake', 'chair', 'couch',
'potted plant', 'bed', 'dining table', 'toilet', 'tv', 'laptop', 'mouse', 'remote', 'keyboard', 'cell phone',
'microwave', 'oven', 'toaster', 'sink', 'refrigerator', 'book', 'clock', 'vase', 'scissors', 'teddy bear']
Once all the preprocessing steps are complete, you are ready to start training. Launch a terminal in the main "yolov7" folder, activate the virtual environment, and run the commands listed below.
git clone https://github.com/WongKinYiu/yolov7.git # clone
cd yolov7
pip install -r requirements.txt # install modules
wget https://github.com/WongKinYiu/yolov7/releases/download/v0.1/yolov7.pt # download pretrained weight
The official repository contains pretrained weights for your model.
Notes:
Depending on where you run this, such as Google Colab, GPU memory may be insufficient. In that situation you can still train by reducing the batch size.
python train.py --weights yolov7.pt --data "data/custom.yaml" --workers 4 --batch-size 4 --img 416 --cfg cfg/training/yolov7.yaml --name yolov7 --hyp data/hyp.scratch.p5.yaml
--img = size of the images the model will train on; the default value is 640.
--batch-size = batch size used for training on the custom dataset.
--epochs = number of training epochs to run to get the best model.
--data = path to the custom configuration file.
--weights = pretrained YOLOv7 weights (yolov7.pt).
Note: if any image is corrupted, training will not begin; if any label file is corrupted, YOLOv7 will ignore that image and its label file.
Wait for training to finish before running inference with the freshly trained weights. The custom-trained weights will be saved in the following folder path.
[yolov7/runs/train/yolov7/weights/best.pt]
Once training is complete, go to the terminal and run the command below to run detection with the custom weights.
python detect.py --weights runs/train/yolov7/weights/best.pt --source "path to your testing image"
You can use YOLO to design your own custom detection model for anything you want.
YOLOv7 is a significant advance in terms of speed and accuracy, and it matches or even outperforms RPN-based models. The model is fast and reliable, and it can now be used for anything.
That is all there is to "training YOLOv7 on custom data". You can experiment with your own data. YOLOv7 is lightweight and simple to use; it trains quickly, produces good inferences, and performs well.
The key takeaways from the YOLOv7 walkthrough above are summarized as follows:
1659654540
One of the most common jobs in computer vision is object detection. It is the foundation for understanding and interacting with a scene.
Object detection is used in everything from simple applications such as item detection to complex jobs such as self-driving cars, to understand a variety of scenarios and make judgments based on them. Security cameras and even today's mobile phones have similar features built in for various functions.
Today, YOLO (You Only Look Once) is the better object detection model framework, and this model is the latest addition to the YOLO model family. YOLO was the first object detection model to incorporate bounding-box prediction and object classification into a single end-to-end differentiable network. It was created and is maintained under the Darknet framework. YOLOv5 was the first YOLO model written in the PyTorch framework, and it is much lighter and easier to use. However, YOLOv7 does not outperform YOLOv6 on the standard benchmark, the COCO dataset, because it does not make fundamental architectural improvements to the network used in YOLOv6.
YOLOv6 has a few flaws, such as poor performance on small items and poor generalization when object dimensions are not uniform.
This is a picture from the original YOLO paper showing how YOLO operates. It has come a long way since then, and we are now at version 5. Although it was not written by any of the original authors or contributors, the basic strategy is the same. It is written in PyTorch, which is a plus. In this version of YOLO, mosaic augmentation is used, and the augmentation and various scaling approaches provide numerous enhancements.
To get your object detector up and running, you must first collect training photos. You should think carefully about the activity you are trying to accomplish and plan ahead for the components of the task that your model may find difficult. To improve the accuracy of your final model, I recommend reducing the domain your model has to handle as much as possible.
For YOLOv7 custom training, we need to develop a dataset. If you do not have any data, you can use the openimages database.
Use LabelImg or any annotation tool to annotate the dataset. Create an annotation text file with the same name as the image.
Prepare a set corresponding to, for example,
YOLOv7 accepts label data in text (.txt) files in the following format:
After you have labeled your data, split it into training and testing folders. The split ratio is up to you, but the most common split is (80-20) percent, meaning 80 percent of the data is used for training and 20 percent for testing. * Images and labels are stored in the folder structure shown.
* To split the data, see the Python library split-folders, which will randomly split your data into train, test, and validation sets.
The following pip command installs the data-splitting library:
pip install split-folders
The input folder should be formatted as follows:
To give you this:
Split the files into a training set and a validation set (and optionally a test set). The final dataset folder looks like the one below before starting YOLOv7 training:
├── yolov7
## └── train
#### └── images (folder containing all training images)
#### └── labels (folder containing all training labels)
## └── test
#### └── images (folder containing all test images)
#### └── labels (folder containing all test labels)
## └── valid
#### └── images (folder containing all validation images)
#### └── labels (folder containing all validation labels)
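The folder tree above can be created programmatically before you copy any files into it. A short sketch, assuming it is run from the directory that should contain `yolov7`:

```python
import os

# Create the train/test/valid image and label folders for the YOLOv7 layout.
for split in ("train", "test", "valid"):
    for kind in ("images", "labels"):
        os.makedirs(os.path.join("yolov7", split, kind), exist_ok=True)
```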
We now need to create a custom configuration file (make sure to specify the appropriate directory), because the training process will depend entirely on that file.
Create a file named "custom.yaml" in the (yolov7/data) folder. Paste the code below into that file, set the correct path to the dataset folder, change the number of classes and their names, then save it.
Create a file that specifies the training configuration. In the custom.yaml file, write the following:
train: (Complete path to dataset train folder)
test: (Complete path to dataset test folder)
valid: (Complete path to dataset valid folder)
#Classes
nc: 1 # replace classes count
#classes names
#replace all class names list with your custom classes
names: ['person', 'bicycle', 'car', 'motorcycle', 'airplane', 'bus', 'train', 'truck', 'boat', 'traffic light',
'fire hydrant', 'stop sign', 'parking meter', 'bench', 'bird', 'cat', 'dog', 'horse', 'sheep', 'cow',
'elephant', 'bear', 'zebra', 'giraffe', 'backpack', 'umbrella', 'handbag', 'tie', 'suitcase', 'frisbee',
'skis', 'snowboard', 'sports ball', 'kite', 'baseball bat', 'baseball glove', 'skateboard', 'surfboard',
'tennis racket', 'bottle', 'wine glass', 'cup', 'fork', 'knife', 'spoon', 'bowl', 'banana', 'apple',
'sandwich', 'orange', 'broccoli', 'carrot', 'hot dog', 'pizza', 'donut', 'cake', 'chair', 'couch',
'potted plant', 'bed', 'dining table', 'toilet', 'tv', 'laptop', 'mouse', 'remote', 'keyboard', 'cell phone',
'microwave', 'oven', 'toaster', 'sink', 'refrigerator', 'book', 'clock', 'vase', 'scissors', 'teddy bear']
Once all the preprocessing steps are complete, you are ready to start training. Launch a terminal in the main "yolov7" folder, activate the virtual environment, and run the commands listed below.
git clone https://github.com/WongKinYiu/yolov7.git # clone
cd yolov7
pip install -r requirements.txt # install modules
wget https://github.com/WongKinYiu/yolov7/releases/download/v0.1/yolov7.pt # download pretrained weight
The official repository contains pretrained weights for your model.
Notes:
Depending on where you run this, such as Google Colab, GPU memory may be insufficient. In that situation you can still train by reducing the batch size.
python train.py --weights yolov7.pt --data "data/custom.yaml" --workers 4 --batch-size 4 --img 416 --cfg cfg/training/yolov7.yaml --name yolov7 --hyp data/hyp.scratch.p5.yaml
--img = size of the images the model will train on; the default value is 640.
--batch-size = batch size used for training on the custom dataset.
--epochs = number of training epochs to run to get the best model.
--data = path to the custom configuration file.
--weights = pretrained YOLOv7 weights (yolov7.pt).
Note: if any image is corrupted, training will not begin; if any label file is corrupted, YOLOv7 will ignore that image and its label file.
Wait for training to finish before running inference with the freshly trained weights. The custom-trained weights will be saved in the following folder path.
[yolov7/runs/train/yolov7/weights/best.pt]
Once training is complete, go to the terminal and run the command below to run detection with the custom weights.
python detect.py --weights runs/train/yolov7/weights/best.pt --source "path to your testing image"
You can use YOLO to design your own custom detection model for anything you want.
YOLOv7 is a significant advance in terms of speed and accuracy, and it matches or even outperforms RPN-based models. The model is fast and reliable, and it can now be used for anything.
That is all there is to "training YOLOv7 on custom data". You can experiment with your own data. YOLOv7 is lightweight and simple to use; it trains quickly, produces good inferences, and performs well.
The key takeaways from the YOLOv7 walkthrough above are summarized as follows: