Buying down risk: Cyber liability
In the US legal system, civil liability is a mechanism through which policymakers can encourage desirable behavior. A clear definition of liability shifts behavior by holding responsible parties accountable for product failures. Liability can take the form either of strict liability, where a responsible entity is liable when things go wrong regardless of its own actions, or of negligence, where an entity is liable only if it fails to meet a clearly defined negligence standard or duty of care. The goal of clarifying liability is to hold firms accountable for known risks rather than unknown ones, as software flaws will always exist in products. Therefore, with respect to software products, a negligence framework is better suited than strict liability to shifting vendor behavior in the interest of the public, national security, and the digital ecosystem. In a negligence framework for software manufacturers, following secure development and vulnerability-management practices would constitute the negligence standard. Such a clear definition of liability would shift vendor behavior by holding parties accountable for failing to meet baseline expectations.
A clear legal-negligence standard for software vendors would improve security in the cyber ecosystem by incentivizing vendors to meet baseline security requirements for products and to provide more security support throughout product lifecycles. Clarifying liability would impose on final assemblers of software products—the entities responsible for placing a product in the consumer market, also called final goods assemblers—specific obligations for ensuring the security of all code incorporated in their final products, including open source packages. This would encourage further growth in the role played by bigger, well-resourced software vendors in improving the security of commonly used open source packages. Failure to meet those standards would either give harmed parties the ability to sue responsible vendors, if lawmakers created a private right of action, or, lacking that right, prompt punitive enforcement from a federal regulator. Without a clear negligence standard, applications of liability will remain patchwork, inconsistent, and opaque. This is the current state of software liability, despite continued calls for more significant financial incentives for more secure software development in the face of pressure to bring new features and services to market rapidly. The few suits brought against vendors have been settled out of court, preventing the establishment of clear precedent and, crucially, avoiding clarity around a legal negligence standard.
A commonly touted argument against clarification of liability is that it might distort or replace a normal market function with undue policy action. In fact, liability would provide a leveling effect, addressing current information asymmetries that prevent consumers from making informed purchasing decisions and empowering them to identify and respond to negligence. A healthy liability regime for software will need to address concerns dealt with by similar frameworks, including the potential for a cap on damages, limits on scope for short-lived products, and the prospect of an added enforcement role for a federal regulator.
Building a liability framework requires addressing two key questions: (1) Which is the right entity to hold liable? (2) What are the standards of a duty of care to which that entity should be held, including how long that duty should apply after a product has gone to market?
Which party bears responsibility?
Many product-liability norms focus on the party most responsible for selling a product to a user in a commercial setting. Software supply chains make regulation difficult—a huge portion of final products and services involve modules, components, and parts produced by third parties, which can obscure liability claims. The automotive industry offers a viable precedent for software product liability. Although many third parties build pieces of a car, the final goods assembler is the name-brand car company on whose design those parts are based and into whose vehicles they are integrated. When the assembly of a final product affects the way a component or part performs, the entity that placed the product in the general marketplace typically assumes liability. If a part or component is deemed defective in and of itself—not because of its interactions with the whole product—liability can then fall on the parts manufacturer. For software, the final goods assembler should be a legal entity that produces, for use in the commercial or consumer context, a software product or service, thereby assuming a duty of care. That duty of care must adequately protect smaller or nonprofit third parties (including open source developers), distribute authority between state and federal authorities, and address the common practice of licensing, rather than outright selling, software. The final-goods-assembler model, rather than universal liability, offers a useful policy framework for resolving and integrating these concerns.
Defining a duty of care
The second key consideration is defining a negligence standard for the responsible party—the metrics by which it fails to meet its duty of care. A negligence standard requires a plaintiff to prove negligence on the part of a manufacturer in order to seek damages. Plaintiffs must prove both that a product was defective at the time of sale and that the defect was an actual and proximate cause of damage to the buyer. A negligence standard for software development would need to clearly answer three questions:
- What standard of development must a product meet before it hits the general market?
- What is the duty of the vendor to provide security support for a product after it hits the general market?
- How long should a vendor reasonably be expected to provide support for a product?
Regarding question one, NIST is working to develop standards for the development and maintenance of “critical software” and has previously promulgated a best-practices framework—the Secure Software Development Framework—integrating multiple industry and international efforts, including BSA’s Framework for Secure Software, to map development processes against minimum security expectations. Questions two and three are related, but their answers must focus on delineating a minimum period for product support and defining minimum expectations for vulnerability management during that period. The time expectation should specify the anticipated end-of-life date for a product and include a general statement that the product must receive security support for as long as the vendor supports it for other purposes, such as usability improvements or bug fixes. The minimum expectation for vulnerability management should require that vendors publish a vulnerability management program and remediate known vulnerabilities within a reasonable amount of time after their disclosure to the public or the vendor.
Recommendations
Congress should pass a law clarifying a duty of care for final goods assemblers of software and a path that harmed parties can pursue, dictated by the creation of a private right of action or the empowerment of a singular, punitive federal regulator. A private right of action would give harmed parties, such as those damaged by a successful cyberattack, the right to sue final goods assemblers of software products for damages in court. Although a competent body like the Federal Trade Commission (FTC) would set the negligence standard and could bring suits against vendors as well, a private right of action would widen the scope of who may bring a suit against a software vendor, providing another avenue for enforcement. Alternatively, lawmakers might decide not to create a private right of action. In that case, enforcement of the standard would fall solely on the FTC. Each structure has pros and cons. A private right of action offers victims of cyber incidents a more direct means of holding irresponsible parties accountable for damages, but it creates opportunities for unjustified litigation that could overburden software final goods assemblers with legal fees. Removing the private right of action alleviates this concern but bottlenecks enforcement with a single, powerful federal regulator. The decision between these two models is beyond the scope of this series, so both are addressed below:
WITHOUT a private right of action
- Establish and maintain a standard-of-care framework: NIST should build on existing software and critical-software security efforts to establish a framework that outlines a standard of care for final goods assemblers. NIST should review and update the framework every two years. The framework should detail which entity in the supply chain is most responsible for the security of a final product, which products the standard of care should cover or exclude, and the minimum viable expectations for the management and remediation of vulnerabilities in covered products.
- FTC regulation: The FTC should, in consultation with NIST, codify a standard of care in the Code of Federal Regulations (CFR). The FTC should review and update the regulation within 180 days of NIST issuing a new framework.
- FTC enforcement: The FTC should be tasked with enforcing the established standard of care under existing FTC authorities to deter or punish unfair or deceptive trade practices and to lay the legal groundwork for a private right of action at some point in the future.
WITH a private right of action
- Set a standard of care: Congress should codify a process in law whereby a competent body, most likely the FTC in close partnership with NIST, sets an enforceable standard of care for software final goods assemblers. The standard of care should revolve around the management of known vulnerabilities in products and the development process that produces them. A final goods assembler should be deemed to meet the standard of care if it distributes security patches for vulnerabilities in a covered product to the public within ninety days of a vulnerability being disclosed through existing public databases, reported to the final goods assembler by a third party, or discovered by the final goods assembler.
- Establish vulnerability discovery, disclosure, and management requirements for final goods assemblers: Require that final goods assemblers establish a publicly posted and regularly updated security vulnerability disclosure policy, a method maintained on a public website for submitting security vulnerabilities to the final goods assembler, and an internal security vulnerability management program.
- Enforcement: The law should task the FTC with enforcing the vulnerability discovery, disclosure, and management requirements.
- Private right of action: The law should clarify that an entity that incurs harm from a covered product that does not meet the established standard of care may bring a private right of action against the final goods assembler in the US District Court with jurisdiction.
- Define the important terms:
- Covered product: Covered products should include any software intended for commercial or consumer use. However, this should not include any product created without mechanisms for monetization or any product that is past its stated end-of-life date and that the vendor no longer supports for other purposes, such as usability improvements or bug fixes.
- Covered vulnerability: Covered vulnerabilities should include any vulnerability that has been disclosed through existing public databases or assigned a Common Vulnerabilities and Exposures (CVE) number, has been reported to the final goods assembler by a third party, or is otherwise known to the final goods assembler.
- Determination of harm: Qualifying harms should include demonstrable economic loss exceeding $75,000, including through any harm to, or arising from impairment of, the confidentiality, integrity, or availability of data, a program, a system, or information; physical damage or destruction; and physical harm to human safety or security, including loss of life.
- Limitations for damages: To protect firms from excessive damages, the law should cap the damages that harmed parties may seek at 15 percent of the liable entity’s annual revenue for the preceding year.