Catalina Goanta

Catalina Goanta is Associate Professor in Private Law and Technology at Utrecht University and the Principal Investigator of HUMANads, a Starting Grant funded by the European Research Council. She is also one of the editors of the Journal of European Consumer and Market Law and the main legal expert for the consortium organizing the activities of the European Commission’s E-Enforcement Academy. Catalina would like to thank the participants of the book-launching event hosted by the University of Chicago for their valuable comments.
* * *
A part of the series, Personalized Law.
As society becomes more measurable, our reliance on unmeasurable legal rules has been called into question. Why have we used uniform legal rules so far? One answer is that, in pursuing the purpose of law—namely, to realize justice—legal systems have embraced uniformity as a feasible way of attaining that goal. Before the era of quantifying human behavior in cyberspace, mapping populations and gathering psychometric data about households and individuals were practically impossible because of the prohibitive cost of data collection. As our lives increasingly move online, that is no longer the case. Profiling practices feeding off the digital footprints of citizens and consumers have created opaque markets and attracted new types of risks and harms, such as situational digital vulnerability and digital asymmetry. Yet what if profiling could be used to remedy asymmetries rather than to create or expand them? In their thought-provoking book, Personalized Law: Different Rules for Different People, Professors Omri Ben-Shahar and Ariel Porat address the universe of possibilities that big data and machine learning innovations open up for personalized law. In doing so, they challenge one of the fundamental legal assumptions of our time—that uniformity is a desirable feature of justice.
In this short Essay, I trace the personalized law concept through the history and present of one of the institutions through which law has traditionally attained personalization—the open-ended norm of good faith. I argue that personalized law could be a modern application of this norm. Good faith is a notion known to legal systems for centuries; from this perspective, it is a relic pertaining to the ancient world. But good faith may be embedded in the legal technology of the future—an ancient alien.
Part I is dedicated to a discussion of good faith as it emerged in classical Roman law. In Part II, I set the scene for a wider discussion that is necessary on the current role of good faith in the contract law of cyberspace. Lastly, in Part III, I look at the relationship between good faith and personalized consumer law, and I address two main challenges this latter concept ought to overcome in order to become a feasible and desirable direction for lawmaking in the next decades.
I. Different Rules for Different Romans
When general rules attract an unjust result in a given circumstance, “good faith may provide the basis for an exception on the facts of that particular case.” As continental jurisdictions underwent codification in the nineteenth century, private law began developing categories and labels to attain systematization, categories that remained applicable to the modern societies of the following two centuries. More concretely, legal categorization has been a means of achieving legal certainty: by putting legal facts into boxes, similar facts can be treated similarly in the application of the law. As an open norm—“the content of which cannot be established in an abstract way but which depends on the circumstances of the case in which it must be applied, and which must be established through concretisation”—good faith provided the flexibility needed to break the mold of predetermined legal boxes and achieve just outcomes.
Justice and fairness have long been seen as concepts emanating from egalitarianism. In the ancient world, codification—here understood more as the writing down of laws—has been said to nurture egalitarianism. In the words of the Greek tragedian Euripides, “when the laws have been written down, both the weak and the rich have equal justice.” Roman law, however, reveals another facet of justice through the concept of bona fides, “one of the most fertile agents in the development of Roman contract law,” contributing to flexibility, convenience, and informality. During the Republic, bona fides developed as a procedural instrument to protect the reasonable expectations of parties to a contract. For instance, bona fides facilitated the emergence of procedural exceptions such as exceptio doli or exceptio metus to requests for performance by those who had secured a contract through fraud or duress, respectively. At a highly formalistic and rigid time in Roman legal history, when there was no Roman law of contract but a law of contracts, bona fides enabled just outcomes by applying flexibility on a case-by-case basis. The formulae used to introduce bona fides in procedures were closely linked to the activity of the Roman magistrate (praetor) governing the codification of rules.
This early transformative effect of good faith was adopted by German legal theorist Franz Wieacker in his own interpretation of the German provision on good faith (German Civil Code § 242), further building on the role of ius praetorium (praetorian law) as defined by Papinian: “Ius praetorium est, quod praetores introduxerunt adiuvandi vel supplendi vel corrigendi iuris civilis gratia propter utilitatem publicam.” Praetorian law was a category of rules or principles that praetors could issue yearly in the form of edicts, which served the purpose of interpreting, supplementing, and correcting applicable civil law for the sake of public interest. These three functions have also served as vehicles for further personalization. An example is the following praetorian rule: “Whatever is said to have been transacted with a person less than twenty-five years old, I shall consider each case on the basis of its particulars.”
Although placed in a society faced with considerably less complexity, the Roman version of good faith is a staunch reminder that legal thought has been oscillating for centuries, if not millennia, between two axes—the macro and the micro—zooming out and establishing order at scale but sacrificing some individual interests or zooming in and dealing with the vastness of human diversity but sacrificing stability and consistency.
II. Good Faith in Contractual Cyberspace: How Personalized Law Could Work
After a brief journey into Roman law, it is now time to return to the present and understand what role good faith currently performs, and could perform, in a world run on big data. While good faith has been a popular subject of research, often dominating thematic scholarship (such as that on European private law), more recent debates relating to contracts in cyberspace are very stingy with their references to this concept.
But what exactly is good faith in contemporary legal doctrine and case law? In the landmark U.S. case Market Street Associates Limited Partnership v. Frey (7th Cir. 1991), the court held that “the duty of honesty, of good faith even expansively conceived, is not a duty of candor.” According to Judge Richard Posner, good faith “is a stab at approximating the terms the parties would have negotiated had they foreseen the circumstances that have given rise to their dispute.” In other words, good faith can act as a way to interpret and align the expectations of the parties but does not fundamentally change the nature of these expectations, such as by shifting the focus from parties’ individual interests in a contract to the goal of solidarity with their contracting partners. In this light, according to Markovits, good faith remains a “pedestrian ideal” attached to the actual contract concluded by the parties rather than to an ideal contract they should have had in mind.
So how does this interpretation of good faith fare in personalization-friendly digital economies? Let us reflect on an example with multiple layers of relevance—a contract between a consumer and a social media platform. And let us discuss it from the perspective of current interpretations as well as normative views on the personalizing role of good faith.
First, such an example is relevant because social media platforms have been some of the main players in profiling the social, cultural, and economic characteristics of their users, all the while establishing secondary markets for the monetization of resulting data. So far, even defining the nature of contracts between consumers and social media platforms has proven to be a challenging enterprise. This difficulty is mainly triggered by the seemingly unilateral distribution of performance (for example, the platform binds itself to offer a free digital service or digital content; the user binds herself to follow conduct standards) as well as by the network embedding of individual contracts. To make matters even more complicated, as contracts whose performance continuously changes unfold in an economic environment where business models change at the speed of light—and where consumer literacy fluctuates heavily—a plethora of questions arise relating to the intention of the parties. The Facebook of ten years ago, primarily interested in platform advertising, has since expanded its business model toward social commerce and content monetization. It does not even have the same name. So what is the pedestrian ideal in this context? What does a regular user expect when making an account? What level of comprehension does she have relating to Facebook’s (or Meta’s) contractual interests, when the latter pertain to a highly opaque and sophisticated supply chain of data brokerage?
While legal scholarship has put a lot of emphasis on the role of platforms as information fiduciaries, this view conflicts with the contractual implications and practices of fiduciaries, as well as the meaning of good faith in this landscape. In Frey, Judge Posner stated that “the particular confusion to which the vaguely moralistic overtones of ‘good faith’ give rise is the belief that every contract establishes a fiduciary relationship.” That is because a fiduciary must treat a principal according to the same standards she would apply to herself—and, in doing so, avoid abusing the principal. The resulting duty is one of utmost good faith, whereas “simple” good faith is considered to exist halfway between fiduciary duties and a duty to refrain from fraudulent behavior. In this space, parties are still free (and expected) to pursue their own interests in the contract. So what can good faith contribute in this situation? If nothing else, it can be used as a tool to interpret the intentions of the parties at the time when the contract was formed—as well as further developments—to better understand their expectations.
To a certain extent, the intentions of social media companies can be extrapolated from the legalese used in the general terms and conditions, and some intentions of users can be inferred from their use of platform affordances, which are available to them by virtue of the services provided by the platform. Building on this understanding, further reflections on what good faith currently means in digital contracts, such as those with social media platforms, can thus enrich legal doctrine and help it supplement existing legal standards that may not have been designed for digital economies. The personalization of contractual interactions and obligations using current conceptions of good faith can be further achieved using what Hesselink refers to in German law as concretization—namely, the creation of different good-faith profiles that may not be individually personalized but could still offer sufficient flexibility to further personalize the scope of obligations in light of the parties’ expectations.
Second, a social media contract is interesting because the same profiling-based business models that make its legal nature volatile and difficult to grasp can also be used to better define the intentions of the parties. However, a few caveats are in order. On the one hand, since social media companies will operate to pursue their commercial interests, they cannot perform neutral activities in the public interest. On the other hand, users are diverse: they have varying levels of information literacy, and their use of social media depends on many other factors as well. It might be an interesting thought exercise to consider that a neutral third party (the state, an NGO, or other entities who can perform auditing activities) could have access to platform data on individual usage patterns, as well as information relating to how social media platforms monetize user data, in order to ascertain what the intentions of the parties are. This normative suggestion is based on the assumption that privacy-friendly solutions for data analysis, such as federated learning, can be used to minimize abuses of privacy rights. This could shape the content of a good-faith obligation between parties, which would subsequently depend on the individual characteristics of a user, and it could also shape the specific application of social media platforms’ business models to certain demographics.
III. Why Personalized Law May Not Work
So far, the narrative of this Essay has focused primarily on the theoretical dimensions of good faith as an ancient alien that continues to perform personalization functions on modern markets. This part complements those arguments with a more practical understanding of what I find to be the two main limitations of personalized law and what may be necessary to overcome them.
The first limitation is that personalization can be scary. Personalization based on big data entails profiling. As I mentioned earlier in this Essay, a relevant question is certainly whether profiling can also be used for good—yet it is still profiling. The scary part of this activity may very well have to do with the dishonest practices shaping an opaque data market so complex that industries, regulators, and individuals alike find it difficult, if not practically impossible, to grasp. Unfair commercial practices taking place in the shadow of legal and factual uncertainty have raised serious concerns from data and consumer protection experts alike, who generally converge on the need for more top-down regulation with teeth in the form of enforcement powers. However, the further proliferation of the practices and technologies behind profiling (for example, data collection and machine learning) now seems inevitable. While facing backlash primarily for the use of facial recognition, public administrations around the world have been using automated decision-making systems with socioeconomic implications for citizens. Professor Virginia Eubanks’s solution for dealing with the amplification of inequality resulting from the use of such automation is to factor in positive discrimination, but more protection may be necessary to rectify harms done because of profiling based on skewed data or poorly performing algorithmic products. This is one solution, but if digital monitoring and enforcement are the future of law in practice, we need to develop many more alternatives for doing this in a legitimate way that supports individual rights rather than further suppressing them.
The second limitation is that personalization is messy. Governments around the world apply very different standards in how they make laws. Looking at public policy reveals that, even in the twenty-first century, legal systems have yet to find an optimal way to measure and govern the development and deployment of new laws, owing to factors such as a lack of infrastructure, funding, and cohesive policies. Personalized law is such a heavily computational endeavor that it cannot exist outside of physical infrastructures, and it requires multidisciplinary expertise. Given its practical dimension, personalized law equally cannot exist in the absence of comprehensive procedures that facilitate its purpose and ensure transparency and accountability, so that digital footprints are respected rather than turned into further harms for the individuals behind them. Institutional procedural-fairness safeguards, designed with the borderless nature of cyberspace and the divisions of the modern world in mind, could thus be a step toward making personalization less problematic in practice.
Against this background, good faith—at least in contract law—can act as a binding agent. It is something old, but it can be something new. We have known this concept since antiquity, and it has proven to embed moral reflections that are as relevant now as they were hundreds of years ago. Presently, good faith can guide us in better understanding what defines the new digital relationships positive law might be underequipped to grasp and slow to solve. Whether personalized law will sweep regulators off their feet remains to be seen. Yet it is a certainty that Ben-Shahar and Porat are asking questions in their book that go to the heart of the architectures on which our current legal systems are built. What better time to structurally redefine the role and identity of law in society than the dawn of exponential complexity?