Wiley Mayer

For secure code, maintainability matters

Author Robert Collier said that “Success is the sum of small efforts repeated day in and day out.” That’s especially true when it comes to security. By now we all understand that securing your systems isn’t as simple as installing a firewall and calling it a day. Instead, it’s multiple actions and strategies in concert, implemented consistently over time. And believe it or not, one small but important strategy is simply writing code that’s reliable (bug-free) and maintainable (easy to understand). Yes, I know that sounds too simple, and possibly even self-serving. So in this post I’ll lay out some of the evidence for how writing reliable and maintainable code means you’re inherently writing more secure code.

Poor maintainability contributed to Heartbleed

To make the case for how maintainable code contributes to security, I’ll start with the Heartbleed Bug. Remember that one? It was a serious vulnerability in OpenSSL that allowed attackers to steal sensitive information with a really trivial attack that XKCD illustrates beautifully. David A. Wheeler teaches a graduate course in secure development at George Mason University. He wrote an extensive analysis of the vulnerability. In it, he laid part of the blame on the difficulty of simply understanding the code involved: “Many of the static techniques for countering Heartbleed-like defects, including manual review, were thwarted because the OpenSSL code is just too complex. Code that is security-sensitive needs to be ‘as simple as possible’.”
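
To see what “too complex to review” was hiding, it helps to look at the shape of the bug itself: the heartbeat handler trusted a length field taken from the attacker’s own message. What follows is a heavily simplified C++ sketch of that pattern, not the actual OpenSSL code; the struct and function names are invented for the example.

```cpp
#include <cstddef>
#include <cstdint>
#include <cstring>
#include <vector>

// Heavily simplified illustration of the Heartbleed pattern (CVE-2014-0160).
// The struct and function names are invented for this example; this is not
// the real OpenSSL code.
struct HeartbeatRecord {
    const std::uint8_t* payload;   // bytes actually received from the peer
    std::size_t actual_length;     // how many payload bytes really arrived
    std::uint16_t claimed_length;  // length field copied from the message itself
};

std::vector<std::uint8_t> build_response(const HeartbeatRecord& rec) {
    std::vector<std::uint8_t> response(rec.claimed_length);

    // BUG: trusts the attacker-controlled length field. When claimed_length
    // exceeds actual_length, the copy reads past the received payload and
    // echoes back up to ~64 KB of adjacent process memory (private keys,
    // passwords, session cookies...).
    std::memcpy(response.data(), rec.payload, rec.claimed_length);

    // Roughly what the fix does: drop requests whose claimed length doesn't
    // match what actually arrived, e.g.
    //   if (rec.claimed_length > rec.actual_length) return {};
    return response;
}
```

Spelled out like this, the missing check is easy to spot. Buried under the real code’s macros, pointer arithmetic and protocol plumbing, it wasn’t, which is exactly Wheeler’s point about complexity.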

When the Heartbleed Bug was eventually found, it was actually detected by human review rather than static analysis. It’s worth noting explicitly here that the problem wasn’t caught in peer review, but long after the merge by independent security researchers. In his analysis, Wheeler discusses why Heartbleed wasn’t found sooner. “Little things like code formatting matter,” he says, “since badly-formatted code is much harder for humans to review.” Code Smell / Maintainability rules for things like code formatting and naming conventions are often dismissed as trivial, maybe because they’re about things so foundational that people take them for granted. As Wheeler points out, that doesn’t mean they’re not important.

Wheeler suggests that attention to maintainability leads to more secure software, and continues that “The goal should be code that is obviously right, as opposed to code that is so complicated that I can’t see any problems.” And of course, that’s what Code Smell rules help you do: write code that’s maintainable and easy to read, so that it can be “obviously right”.

Experts: Bugs and Code Smells are security ‘weaknesses’

Of course, Wheeler’s just one person, and opinions are like belly buttons, right? So let’s look at another source: the CWE, which makes the case for both maintainability and reliability as contributors to security.

I want to start with the 2020 CWE Top 25 Most Dangerous Software Weaknesses, which is an expert-sourced subset of the CWE. But first, some background: CWE stands for Common Weakness Enumeration. It’s a crowd- (of experts) sourced list of common software and hardware weaknesses that have “security ramifications”. It has about 1,300 entries, including quite a few that are used for categorization. The Top 25 is a list of “the most common and impactful issues experienced over the previous two calendar years.” Given this build-up, it would be reasonable to assume that all 25 CWEs in the list describe security vulnerabilities. But by my count, nearly a third are bugs. Bugs that could lead to security breaches, but bugs nonetheless. For instance, lucky number 13 in the list is CWE-476, NULL Pointer Dereference.
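
It’s worth pausing on why something as mundane as a NULL pointer dereference earns a spot on that list. Here’s a small hypothetical C++ example (the names are invented) of the usual consequence: whoever can steer the program onto the unchecked path can crash the service on demand.

```cpp
#include <string>
#include <unordered_map>

struct Session {
    std::string user;
    bool is_admin;
};

// Hypothetical lookup: returns nullptr when the token is unknown.
const Session* find_session(const std::unordered_map<std::string, Session>& sessions,
                            const std::string& token) {
    auto it = sessions.find(token);
    return it == sessions.end() ? nullptr : &it->second;
}

bool is_admin_request(const std::unordered_map<std::string, Session>& sessions,
                      const std::string& token) {
    // BUG (CWE-476): a request with an unknown token makes find_session return
    // nullptr, and the dereference below crashes the process -- a remotely
    // triggerable denial of service, even though it's "just a bug".
    return find_session(sessions, token)->is_admin;

    // The null check makes the code obviously right (and fail safe):
    //   const Session* s = find_session(sessions, token);
    //   return s != nullptr && s->is_admin;
}
```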

In fact, by one count, about 60% of CWEs aren’t vulnerabilities at all. CWE-699 is the Software Development view. It “organizes weaknesses around concepts that are frequently used or encountered in software development”. It contains 40 sub-categories, including Complexity Issues, Numeric Errors, and Bad Coding Practices. Of the 59 leaf listings under Bad Coding Practices, the first is the beautifully emblematic CWE-478, Missing Default Case in Switch Statement.

This is not a rule most people see as important for Code Security. At SonarSource, we don’t even class it as a Bug, but as a Code Smell / Maintainability problem. But its inclusion in the CWE says that experts in the field see it as important for security, because small, consistent efforts like providing default clauses help you write “code that is obviously right”.
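
To make that concrete, here’s a minimal C++ sketch of the kind of code CWE-478 describes. The enum and function are hypothetical, invented just for illustration, but they show how a missing default clause leaves a switch that silently does the wrong thing the moment an unexpected value shows up.

```cpp
enum class AccessLevel { Guest, User, Admin };  // hypothetical roles

bool may_delete_account(AccessLevel level) {
    bool allowed = true;  // risky "default allow" starting value

    // BUG (CWE-478): no default clause. If AccessLevel ever gains a new value
    // (say, Auditor) and this switch isn't updated, or if an unexpected value
    // arrives, nothing matches and the function silently returns "allowed".
    switch (level) {
        case AccessLevel::Guest:
            allowed = false;
            break;
        case AccessLevel::User:
            allowed = false;
            break;
        case AccessLevel::Admin:
            allowed = true;
            break;
        // A default that denies (or fails loudly) makes the intent explicit:
        //   default: allowed = false; break;
    }
    return allowed;
}
```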

#security #coding #code-review #software-maintenance #software-development #secure-software-development #cpp #cpp-security
