Birth of a bug
What are software vulnerabilities (bugs) and where do they come from?
A couple of definitions are helpful for context:
Gartner® definition: “A bug is an unexpected problem with software or hardware. Typical problems are often the result of external interference with the program’s performance that was not anticipated by the developer.”
MITRE defines a vulnerability as “…a mistake in software that can be directly used by a hacker to gain access to a system or network.”
Bugs and vulnerabilities can occur for many reasons: new weaknesses, increasingly sophisticated attack methods, the adoption of new technologies, infrastructure, and development practices, talent shortages, and more. In many cases, the root cause involves human error. And that means software bugs are here to stay. So, let’s walk through the life of a bug and what you can do about it.
Fun Bug Fact:
The first computer bug was rumored to be a moth that shorted out Harvard’s Mark II computer in 1947. The computer was “debugged,” and the “bug” was taped to the logbook entry page kept by technicians in the lab. Talk about a bug report!
Finding bugs
How are software vulnerabilities found?
Most software bugs are found by security researchers we affectionately refer to as bug “bounty hunters.” Bug bounties are programs run by software vendors (and external security firms) that often pay cash rewards to threat researchers and ethical hackers who discover and report bugs. Vendors then replicate the reported bugs and ultimately provide a “fix,” often in the form of a patch or configuration change instructions.
Bugs can also be discovered by unethical hackers, or “villains,” with less honorable goals. Unlike our bounty hunting friends, these bad actors don’t report the vulnerabilities they discover to vendors, or anyone for that matter. They weaponize them for their own malicious use.
CISA’s Known Exploited Vulnerabilities (KEV) Catalog lists vulnerabilities confirmed to have been exploited with malicious intent, often identified as the result of investigating a breach or a series of breaches.
Some of the less honorable motivations to hack include:[1]
- Financial gain: Includes ransomware, direct theft from victims, selling data on the dark web, etc.
- Political statements: The “hacktivist” is looking to disrupt websites, systems, etc. to support a specific ideology.
- Revenge: Disgruntled employees, unhappy customers or others who cause damage to address a personal vendetta.
- Fame: Hackers looking for recognition.
- Theft of intellectual property: The motivation for the theft of intellectual property is especially dangerous for corporations and other organizations because this category tends to include state- and corporate-sponsored attacks targeting everything from weaponry blueprints to product patents. State-sponsored hackers often have an unlimited amount of time and funding to find a way to hack into their target.
MITRE tracks an exhaustive list of threat groups. Examples of these nation-state actors are Advanced Persistent Threat (APT) groups[2] like China’s DeputyDog (APT17), Buckeye (APT3), and the terrifying Double Dragon (APT41)[3].
Gartner also expressed its security concerns by saying, “It is not possible for an organization, at any security maturity level, to achieve zero vulnerabilities in its environment.”[4]
Fun Bug Fact:
Many software vendors require a non-disclosure agreement (NDA) for participation in bug bounty programs to help prevent premature disclosure of vulnerabilities and discourage bad actors from weaponizing vulnerabilities before customers have a chance to retrieve, test, and apply patches.[5]
Software vendor acknowledgement
How vendors respond to bugs
Do you know what vendors are required to do when bugs are discovered in their software? Typically, nothing. Why not? Because the user agreement[6] that licensees sign usually protects the vendor with disclaimers of warranties and remedies. While that is disheartening, vendors do act. They usually:
- Review the bug submission
- Reproduce it (through bounty hunter instructions)
- Validate the bug, identify affected products, and accept the bug for further action
- Develop a patch
- Add the new patch to their update packages such as Oracle Critical Patch Updates (CPUs) or SAP Security Patch Day
This process can take time, ranging from weeks to years.
Fun Bug Fact:
NIST tracks the submission of new vulnerabilities in the National Vulnerability Database (NVD). In 2023, more than 29,000 security vulnerabilities were registered as common vulnerabilities and exposures (CVEs), and 2024 is on track to outpace 2023.[7]
Naming a bug
How common vulnerabilities and exposures (CVE) are identified
It helps if software bugs have names (well, numbers actually). Fortunately, CVE Numbering Authorities, including researchers and software vendors,[8] assign CVE IDs from pre-allocated blocks of numbers reserved by the MITRE-run CVE Program.[9] They assign one CVE record per submitted vulnerability so that cybersecurity experts can track and report on software fixes and mitigations.
Note: the date the CVE was created reflects the date the number was first issued to a vendor — which can differ dramatically from the date the vulnerability was discovered![10]
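CVE IDs follow a simple published syntax: the literal prefix “CVE-”, a four-digit year, and a sequence number of at least four digits. As a minimal sketch of working with that format programmatically (the helper function name is our own, not part of any standard library):

```python
import re

# CVE IDs follow the pattern CVE-<year>-<sequence>, where the sequence
# number is at least four digits (e.g. CVE-2021-28664).
CVE_PATTERN = re.compile(r"^CVE-(\d{4})-(\d{4,})$")

def parse_cve_id(cve_id: str):
    """Return (year, sequence) for a well-formed CVE ID, or None."""
    match = CVE_PATTERN.match(cve_id)
    if not match:
        return None
    return int(match.group(1)), int(match.group(2))

print(parse_cve_id("CVE-2021-28664"))  # (2021, 28664)
print(parse_cve_id("CVE-21-1"))        # None: malformed year and sequence
```

Remember that the year in the ID reflects when the number was reserved, not necessarily when the vulnerability was discovered.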
Fun Bug Fact:
CVE lists are voluntary and incomplete by definition, because vendors have no obligation to report bugs in new or legacy software.[11] Additionally, while MITRE and NIST provide guidelines and standards for CVE submissions to ensure consistency, vendors are not obligated to follow them.
What you can do about software vulnerabilities
How to develop a security and risk management (SRM) response
If your reaction to learning about a new software bug is “Just great! Another bug! And another patch!” you aren’t alone.
However, that bug or vulnerability has likely been in the software since it was first written (unless it was introduced by a patch). And because bad actors can exploit a bug quickly, it’s smart to have a plan in place for bug defense.
Vendor patching is a common, yet laborious, defense with steps such as:
- Choose patches from Oracle CPUs or SAP Security Patch Day releases
- Bundle patches and test them on a non-production server
- Confirm that they do not affect critical data such as financial balances or health records
- Schedule downtime and take production systems offline for patching
- Repeat the entire process if the vendor issues new patches in response to user feedback to the original patches
Fun Bug Fact:
The average time for bugs to be exploited in the wild was only 4 days in 2024[12], down from more than 74 days in 2014.[13]
Addressing bugs
How to approach security patches
While staying current with vendor security patches has been a traditional way to mitigate vulnerabilities[14], it’s not uncommon for companies to struggle with patching[15] because it can require:
- Highly skilled IT resources[16] and extensive planning, testing, and system downtime[17]
- Long waits, sometimes months or even years, for software vulnerabilities to be detected and for patches to be provided, tested, and installed[18]
- Potentially expensive, low-ROI software upgrades just to stay on releases that still receive vendor security patches[19]
A strategy that relies solely on patching struggles to:
- Quickly mitigate all cybersecurity risk[20]
- Account for unique system configurations or data[21]
- Protect custom code developed by a customer, or address a customer’s unique environment and architecture, which vendor patches are not designed to do
- Deliver timely protection from known threats, since developing and applying vendor security updates can take an extended period of time
Additionally, a considerable amount of downtime is required to patch and update systems. This can cause friction between security teams trying to deploy all available tactics to reduce risk and IT operations teams charged with uptime and business continuity.[22]
Fun Bug Fact:
No single solution can ensure you can find and fix all vulnerabilities.[23]
Outsmarting bugs
How Rimini Protect™ addresses software vulnerabilities
Many enterprise software systems have powered clients’ businesses for decades and are now beyond their vendor support end dates, meaning limited or no new security patches are available for these systems. Security patches are also typically unavailable once a client leaves vendor support.
Rimini Street enables a business and data-driven approach to prioritizing risk mitigation. We focus on improving security postures and reducing security risk exposure without the application of vendor-provided security patches or modification of vendor code.
As a result of limited security patch availability, many organizations have already established other defensive strategies to protect their enterprise software. Rimini Protect solutions are tailored to each client’s ecosystem, complementing and enhancing existing security strategies to protect against known (discovered) and unknown (undiscovered) threats and vulnerabilities.
Software vulnerabilities FAQ
Answers to common questions about software vulnerabilities
What is a software weakness vs. a vulnerability?
Weaknesses are software or hardware conditions that can lead to a vulnerability. MITRE maintains a list of weaknesses known as Common Weakness Enumerations (CWEs). A vulnerability is a “bug” or mistake that can be directly used to exploit (gain access to) a system. NIST maintains the National Vulnerability Database (NVD) of known Common Vulnerabilities and Exposures (CVEs).
What are some examples of software vulnerabilities?
Think of vulnerabilities as ways of taking advantage of the context (the weakness) to exploit a system. The CWE-787 entry includes a partial list of the many ways this weakness has been used to exploit systems. Examples of software vulnerabilities include corrupting memory (CVE-2021-28664) and “escaping” from a test or sandbox environment (CVE-2020-0041).
What are some examples of software weaknesses?
Think of software weaknesses as the context of the situation. For example, CWE-787 is titled “Out-of-bounds Write” which is described as writing data before or after the area of memory that the program is intended to access. Allowing data to be written in unexpected areas of memory can cause unpredictable results and is a weakness.
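To make the weakness concrete, here is a toy sketch of an out-of-bounds write. Python bounds-checks its own indexing, so the example simulates a C-style memory layout (the buffer sizes and “role” field are invented for illustration):

```python
# A toy model of CWE-787 (Out-of-bounds Write): an 8-byte input buffer
# followed immediately in "memory" by an 8-byte field the program trusts.
memory = bytearray(16)
memory[8:16] = b"role=usr"                  # trusted data next to the buffer

def unchecked_write(data: bytes) -> None:
    """The weakness: copies input without checking the 8-byte bound."""
    memory[0:len(data)] = data              # oversized input spills past byte 7

unchecked_write(b"AAAAAAAA" + b"role=adm")  # 16 bytes of "user input"
print(bytes(memory[8:16]))                  # trusted field is now b'role=adm'
```

The attacker never touched the role field directly; the unchecked write into the adjacent buffer did it for them, which is exactly why out-of-bounds writes cause such unpredictable results.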
What is the difference between malware and software vulnerabilities?
Malware is a program that can be installed via an exploit intended to compromise the “confidentiality, integrity, or availability of the victim’s data, applications, or operating system.” Software vulnerabilities are “bugs” that can be used to exploit a system.
What is an exploit?
An exploit is a “bad actor” using a program or piece of code to take advantage of a security vulnerability. CISA keeps a list of known exploited vulnerabilities called the KEV catalog.
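CISA publishes the KEV catalog as a machine-readable JSON feed, which makes it easy to check your own vendor list against known-exploited CVEs. In this minimal sketch, the field names (`cveID`, `vendorProject`, `dateAdded`) are assumptions based on the published feed, and the records themselves are invented for illustration:

```python
import json

# Hypothetical records shaped like entries in CISA's KEV JSON feed.
kev_feed = json.loads("""
{
  "vulnerabilities": [
    {"cveID": "CVE-2021-11111", "vendorProject": "ExampleSoft", "dateAdded": "2021-11-03"},
    {"cveID": "CVE-2020-22222", "vendorProject": "OtherCorp",   "dateAdded": "2021-11-03"},
    {"cveID": "CVE-2022-33333", "vendorProject": "ExampleSoft", "dateAdded": "2022-01-10"}
  ]
}
""")

def exploited_cves_for_vendor(feed: dict, vendor: str) -> list:
    """Return the CVE IDs in the feed attributed to one vendor."""
    return [v["cveID"] for v in feed["vulnerabilities"]
            if v["vendorProject"] == vendor]

print(exploited_cves_for_vendor(kev_feed, "ExampleSoft"))
# ['CVE-2021-11111', 'CVE-2022-33333']
```

A real workflow would download the live feed and cross-reference it against an asset inventory, prioritizing anything that matches for immediate mitigation.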
What are some examples of exploits?
Some examples of exploits include:
- An individual or organization using methods described in CVE-2021-28664 to write code that could corrupt a company’s database.
- An individual or organization using methods described in CVE-2020-0041 to write code that enables unauthorized access to files on a computer to steal customer information or trade secrets.
Why are CWEs important?
Mitigating the overall weakness can usually render ineffective all of the CVEs associated with a single CWE. It also protects against future vulnerabilities that rely on the same weakness (proactive protection). This is much more efficient than attempting to address each vulnerability one at a time.
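Continuing the out-of-bounds write example, a single bounds check at the one place input is copied neutralizes every vulnerability, current or future, that depends on that weakness. A sketch with an invented buffer size:

```python
# Bounds checking at the single write path mitigates the whole CWE-787
# class: any CVE that relies on oversized input is neutralized, including
# ones not yet discovered. (Illustrative sketch, not vendor code.)
BUFFER_SIZE = 8
buffer = bytearray(BUFFER_SIZE)

def checked_write(data: bytes) -> None:
    """Reject any input larger than the buffer before copying."""
    if len(data) > BUFFER_SIZE:
        raise ValueError("input exceeds buffer size")
    buffer[0:len(data)] = data

checked_write(b"hi")              # fits: copied normally
try:
    checked_write(b"A" * 64)      # any overflow attempt is rejected
except ValueError as err:
    print("blocked:", err)
```

Compare this one fix to patching each overflow-based CVE individually: the weakness-level mitigation is both broader and cheaper to maintain.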