The big news last week in FTC- and Data Security-land was the FTC’s loss in its enforcement action against LabMD. This decision, announced at the very end of the previous week – the afternoon of Friday the 13th, in fact – was a major loss for the FTC and a major win for consumers and small businesses: FTC Chief Administrative Law Judge Chappell roundly rejected the FTC’s data security case against LabMD, a small cancer detection lab effectively put out of business (0) as part of the commission’s imperious decade-long effort to establish itself as the nation’s chief cybersecurity regulator. The judge’s opinion calls into question the FTC’s underlying legal theory and enforcement-based approach to developing data security norms – an approach under which a majority of companies in the United States could be found guilty of violating Section 5 of the FTC Act.
Over the past decade, the FTC has taken aggressive action against more than 50 firms relating to data breaches. Only two firms have had the wherewithal to challenge the FTC’s efforts against them: Wyndham and LabMD. As I discussed previously (1), the developments in the Wyndham case in August suggested serious problems with the FTC’s approach to regulating data-security practices. These concerns were borne out in Friday’s LabMD opinion (2), the first opinion to address the FTC’s approach on the merits. After seven years of litigation, Judge Chappell rejected the FTC’s evidence, suggested the case should never have been brought, rejected the FTC’s underlying legal theory, and, perhaps most important, suggested that the FTC’s approach to its data security cases is unconstitutional.
The LabMD case
The FTC’s case against LabMD concerns an employee’s use of a peer-to-peer filesharing application on her office computer. The employee installed this software in such a way that it exposed a file containing customer information. This file was discovered by Tiversa, a (pretty shady-seeming (3)) security consulting firm, which reported LabMD to the FTC after LabMD refused to hire Tiversa as a security consultant. The FTC sued LabMD, arguing that customer information was made available online as a result of LabMD’s alleged unreasonable security practices, and that such practices are “unfair” within the meaning of Section 5 of the FTC Act.
Section 5 prohibits, among other things, unfair acts and practices. In order for a practice to be “unfair,” Section 5(n) requires that it “causes or is likely to cause substantial injury to consumers which is not reasonably avoidable by consumers themselves and not outweighed by countervailing benefits to consumers or to competition.” The judge focused on the first prong of this test, the requirement for actual or likely substantial injury to consumers, finding no credible evidence of either actual or likely consumer harm.
The second part of this finding – no credible evidence of likely consumer harm – is critical to both this and future FTC actions. A central aspect of the FTC’s legal theory is that it is predicated on the fact of unreasonable security practices – not on the fact of actual harm. Under this theory, the FTC’s basic argument is that unreasonable security practices are likely to result in the disclosure of sensitive consumer information, and that the disclosure of this information is likely to result in identity theft and other harms. As explained by the judge, the FTC argued that “Section 5 unfair conduct liability can be imposed based solely on the risk of a data breach and that proof of an actual data breach is not required.”
Judge Chappell would have none of the FTC’s argument. “The term ‘likely’,” he tells us, “does not mean that something is merely possible. Instead, ‘likely’ means that it is probable that something will occur.” He bases this conclusion in part on available case law and prior FTC decisions. But he goes well beyond this, saying as well that “[i]f unfair conduct liability can be premised on ‘unreasonable’ data security alone, upon proof of a generalized, unspecified ‘risk’ of a future data breach, without regard to the probability of its occurrence, and without proof of actual or likely substantial consumer injury, then [the statutory standard provided in Section 5(n)] would not provide the required constitutional notice of what is prohibited.”
In other words, not only is the FTC’s theory of the case insufficient to meet the statutory requirements of Section 5, but if it were, then the FTC’s interpretation of Section 5 is unconstitutional: “Fundamental fairness dictates that proof of likely substantial consumer injury under Section 5(n) requires proof of something more than an unspecified and hypothetical ‘risk’ of future harm, as has been submitted in this case.”
What’s left of FTC regulation of data security?
The judge’s opinion cuts to the heart of the FTC’s data security efforts. Over the past 15 years, the FTC has sought to establish itself as the nation’s chief data security regulator. It has done so primarily through the use of enforcement actions – the use of its administrative investigation and enforcement power to take action against firms the FTC believes have unreasonable security practices. Almost all of these enforcement actions have resulted in settlements – a result driven largely by the high cost of challenging an administrative enforcement action, the asymmetric advantage that agencies have in pursuing such actions, and the potential reputational harms that can result from a protracted fight with a federal agency.
There are two key things to understand about the FTC’s approach to developing its data security regulations. First, it is based on the FTC’s Section 5 “unfairness” authority. There is no such thing as “data security authority” – the FTC emphatically does not have organic authority to regulate firms’ data security practices. Some practices may be unfair if they meet the requirements to bring an unfairness claim under Section 5(n). But the fact that some data security practices may be unfair doesn’t mean the FTC has generalized authority over data security practices – many bad data security practices may not be “unfair,” and most “unfair” practices have nothing to do with data security.
The second thing to understand is that the FTC’s approach to developing data security regulations is based on adjudicative enforcement actions. The commission refers to this as a “common law” approach to developing data security regulations. The underlying (though, as I explain elsewhere (4), flawed) idea is that the results of the commission’s various enforcement actions, be they settlements or adjudicated opinions, will over time provide guidance to regulated entities (that is, all businesses operating in the United States) as to good and bad data security practices.
Proponents of the FTC’s efforts lament Judge Chappell’s rejection of the commission’s efforts in the LabMD case. Chris Hoofnagle has probably offered the most comprehensive explanation (5) for why he finds Judge Chappell’s opinion problematic. His core argument is that the FTC Act’s focus is on proscribing bad practices, and that the need to tie a bad practice to harm limits the commission’s ability to take action to prevent such practices. As Hoofnagle says, “There is no point in having a FTC if it cannot act to prevent risky practices and if it can only act when common law formalities are met.”
But those “common law formalities” that Hoofnagle, along with other proponents of FTC power, laments are there for a reason: to ensure that the commission uses its broad statutory power in ways that comport with at least the bare minimum requirements of constitutional due process. Indeed, it seems curious to argue that an agency that has adopted a “common law” approach to regulation ought not to also be subject to “common law formalities.” This is a concern shared, or at least recognized, by every judge to consider the FTC’s data security efforts. Judge Chappell’s opinion explains that the FTC’s proffered approach “would not provide the required constitutional notice of what is prohibited.” In an earlier procedural opinion in the case, Judge Duffey of the Northern District of Georgia called FTC counsel “completely unreasonable” and criticized the FTC approach, saying the FTC “ought to give [regulated parties] some guidance as to what you do and do not expect, what is or is not required. You are a regulatory agency. I suspect you can do that.” As discussed in my previous post on the topic, in Wyndham the 3rd Circuit judges – while allowing that bad security practices could constitute unfair practices – raised concerns about the constitutionality of the FTC’s approach, including a series of footnotes in their opinion suggesting that the FTC had not met constitutional notice requirements. Even Judge Salas, the District Court judge who first considered these concerns in the Wyndham case, has found that these “statutory authority and fair-notice challenges confront this court with novel, complex statutory interpretation issues that give rise to a substantial ground for difference of opinion.”
It is easy to see why judges are cautious about the FTC’s claim to broad jurisdiction. Under the FTC’s theory, any firm that may experience a data breach is arguably engaged in an “unfair” practice, and any firm that actually experiences one is demonstrably engaged in such a practice. But it is likely that well more than half of the firms in the United States have experienced breaches that give outsiders access to information held by the firm. And it is well understood that there is no such thing as a “secure” system – any firm’s data could be breached by a motivated attacker. Under the FTC’s theory of data security, under which the possibility of a data breach demonstrates unfair security practices, at least half of the firms in the United States, and arguably every firm in the United States, could be the subject of an FTC enforcement action. All that protects any given firm from such an investigation is the whim of the FTC commissioners – or, worse and more likely, the whim of FTC staff.
To their credit, proponents of the FTC’s efforts do have an understandable concern: it makes no sense that, if two otherwise identical firms are engaging in identical and legitimately bad security practices, only the firm that has the misfortune of experiencing a data breach that results in consumer harm should be subject to an enforcement action. Both firms were doing the same bad thing, and it seems problematic that only the firm that experienced the misfortune of a breach can be sanctioned for those practices.
But while this seems problematic, it is not. It is, in fact, a tenet of due process. If a practice is actually bad – if it is so likely to result in data breaches and harm to consumers – then there should be examples of such breaches. The FTC can take action against those firms, or use them as examples to support agency rulemaking efforts, to identify conduct that is actually problematic. Firms engaging in similar practices, even if they have not (yet) experienced a data breach, can then learn from the consequences imposed upon firms engaged in practices that resulted in consumer harm. As is usually the case, decisions taken at the margin affect those made by inframarginal actors.
If, on the other hand, examples are so few and far between that the FTC cannot find cases in which consumers are actually harmed, that suggests that the commission is not addressing a substantial concern. We will never be able to prevent all data breaches. We should focus our attention instead on addressing those which we can avoid at reasonable cost. The “harm” requirement is one way to focus the commission’s efforts in this way. It is also a way to prevent commission overreach.
It must be remembered that Section 5(n), which implemented the “harm” requirement, was added to the FTC Act as a direct result of commission overreach. Hoofnagle points to (6) the extensive authority given to the FTC by the courts in the 1910s through the 1930s – but he omits from his history the period in the 1970s when the FTC was chastised by the Washington Post as the “national nanny.” Indeed, the commission was shut down by Congress for its abuses of the power that the courts had given it – the power that Hoofnagle applauds. Section 5(n) was a direct response to these abuses and was meant precisely to preclude the sort of overreaching approach to regulation that the FTC has taken to data security.
LabMD (and Wyndham) means the FTC should change gears
While Hoofnagle’s discussion of the history of FTC authority is flawed, it ends on an important and correct note: “The historical purpose of the FTC was to be preventative, cooperative, and not penal. What is needed here is correction of the practice and a swifter, non-punitive resolution.” Unfortunately, the commission’s entire approach to regulating data security practices has been overbearing, adversarial, and punitive. That is the very nature of the adjudicative approach; it is the very nature of the common law, and, lest we forget, the FTC overtly describes its approach to data security as “common law–like.”
I’m gratified to see Hoofnagle (and, in conversation with him, other long-time proponents of the FTC’s approach to data security) recognize that the FTC should adopt a cooperative and non-punitive approach to data security. Indeed, this is one of the central arguments in my own work on the FTC, such as my forthcoming Iowa Law Review article (7), in which I argue that “the agency’s efforts are largely problematic because it has proceeded with the mentality of an enforcement agency … it would be better advised to adopt the mentality of a rule-making agency. … And, to the extent that it is acting to develop legal norms, the FTC should expressly not seek damages, censure, or other punitive action against firms.”
Both LabMD and Wyndham make clear that the FTC’s enforcement-based approach to data security is problematic. Both cases recognize that there may be exceptional cases in which there is a clear potential for harm to consumers and in which a firm’s practices are “unfair” under the FTC Act. But both cases also call into question the FTC’s broader efforts to influence data security norms, raising serious questions about the sufficiency of these efforts to meet basic requirements of constitutional due process.
The better approach would be for the commission to take a step back and adopt the “preventative, cooperative, and not penal” role that Hoofnagle suggests. Good data security is hard; perfect data security is impossible. The commission should be a friend of firms trying to go from bad to better security, and from better security to good. It should not be their antagonist or enemy. The fact that it is an antagonist – that it works with shady companies like Tiversa, that it destroys companies in the business of saving human lives, and that its attorneys could ever count such an action in the “win” column – demonstrates that the commission is concerned more with cementing and growing its power than with meaningfully improving the state of data security. Hopefully LabMD and Wyndham will give the commission pause and give it a moment to rethink its ill-conceived common law approach to data security.