Once you understand algorithms, the US legal system starts to make more sense. Or maybe nonsense. In the neutral algorithmic terms of information flow and security, the US court system is being “hacked.” In a wide algorithmic sense, “hacking” is when a functioning system is made to malfunction by inputs routed to the wrong place.
So, when I say that the US court system is being hacked, I mean that it is making rulings which are legally true, yet against the intent of the law. Too little common sense and too much technicality — legalism run amok — is tying the Law in knots. Informational concepts like data compression (reducing the bits needed to represent data) and legibility (or “legal cognizability,” the court’s authority to try a case without prior approval) explain how such strategies work, and how they can be stopped.
On January 5, I sat in a US Federal courtroom, hearing arguments in three cases which will determine the future of topics ranging from state violence to digital damage to children. These cases took place in the US Court of Appeals for the Ninth Circuit, one of the highest courts in the US.
In the first case, Los Angeles Press Club v. Kristi Noem, the US Government argued that giving audible warning to a (peaceful) crowd before firing crowd-control weapons at people is burdensome, because audibility — making sure people can hear the warning and thus avoid injury — is “up to the whim of the crowd.” In the second case, an online company claimed that because their server sent an email, the court can presume the recipient was fully informed of (and thereby implicitly accepted) a terms-of-use change depriving them of all US legal rights regarding the product, including its harms. Similarly, in the third case, an EdTech company claimed that because a child used their software at school, the software’s terms-of-use contract deprives the child’s family of all US legal rights regarding the product, including its harms.
All three cases demonstrate how the Law can be hacked. The culprits are compressed, non-human representations of human activities, such as contracts or disclosures, which assume for themselves the power of human judgment. But laws need context and live interpretation, which comes from humans and not just other laws. The Founding Fathers had good reason to insist that only human beings be the judges and jury.
Hacking the Law has a historical precedent
Charles Dickens introduced the concept of “legal ignorance” in his novel Bleak House back in 1852. When a bypassed character in Bleak House named Gridley plaintively begged the Lord High Chancellor to recognize his complaint, the Chancellor’s response was, “I am legally ignorant of your existence.” Unfortunately for Gridley, a previous legal step deciding who belonged in the estate in the first place had left him out, and now he was forever prohibited from even stating his case. That kind of legal gambit operates like a trap door: hard to reverse once triggered, however stupid the result.
That great book described a particular legal arena in Victorian England called “Chancery,” a kind of probate court gone rogue. A chummy network of lawyers and chancellors (judges) would pay themselves out of the estates they were supposed to administer, often draining the money entirely. Chancery was a travesty of justice, a perfect example of self-funded administration run amok.
Through a narrow historical lens, Chancery began around the year 1000 as a royal document-issuing office. As with most administrative overgrowth, by 1400 Chancery had expanded beyond mere document issuing to providing “fair relief” to petitioners in court. Over a few hundred years, the office became so parasitic that it was abolished in the 1870s, two decades after Dickens’ writing. Through a wide algorithmic lens, Chancery illustrates the mathematical process of “leading indicator dependency,” the tendency of any learning system to chase quick rewards and ignore long-term costs.
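To see the mechanics, here is a minimal sketch (illustrative Python; the options and payoff numbers are invented): a purely greedy learner, scoring options only by their leading indicator, reliably picks the choice with the worst long-term outcome.

```python
# A minimal, hypothetical sketch of "leading indicator dependency":
# a learner that scores choices only by immediate payoff reliably
# picks the option with the worst long-term outcome.

choices = {
    # name: (immediate_reward, long_term_cost) -- invented numbers
    "administer estates honestly": (1.0, 0.0),
    "bill fees against the estate": (5.0, 20.0),  # quick reward, deferred ruin
}

def perceived_value(immediate, long_term, foresight=0.0):
    """Value as seen by a learner that weights future cost by `foresight`.
    A purely greedy learner (foresight=0) sees only the leading indicator."""
    return immediate - foresight * long_term

# The greedy learner picks the parasitic option every time...
best = max(choices, key=lambda c: perceived_value(*choices[c]))
print("greedy choice:", best)  # -> bill fees against the estate

# ...while any learner that weights long-term cost avoids it.
best = max(choices, key=lambda c: perceived_value(*choices[c], foresight=1.0))
print("far-sighted choice:", best)  # -> administer estates honestly
```

Swap in any quick-reward system (bacteria chasing a chemical gradient, Chancery lawyers chasing fees) and the arithmetic is the same.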
By blurring historical detail, this wide algorithmic lens covers a lot. On the one hand, leading indicator dependency can explain how creatures as simple as bacteria can be tricked into self-destructive behavior. In a strict sense, the motor systems of those creatures have been hacked. On the other hand, leading indicator dependency can also explain how systems as complex as Chancery evolve to exploit and defend their resource streams.
Chancery gained traction by creating and enforcing ever more specific contracts and technicalities which overrode common sense. That is, by creating and enforcing various minute Letters of the Law, Chancery collectively overwrote the Intent of the Law.
Now the same thing is happening in the US.
Hacking beyond the computer
We can understand this new hacking in terms of old hacking. Hacking the Law and hacking computers are similar, because computers and laws have similar structures, rules and loopholes. For example, software is organized in hierarchies — minutiae atop foundational meta-categories, subclasses atop superclasses — while the law similarly stacks local jurisdictions atop county, state and federal, all on top of English Common Law. To decide what information to pay attention to, computers use protocols, handshakes, private keys and so forth, while the law uses standing, jurisdiction, appellate process and such. To keep information safely compartmentalized, computers use address space, kernel space, sandboxes and user space, while the law tracks decisions, reasoning and precedents.
In both cases, once a bad decision becomes a precedent, it can spawn similar decisions, perpetuating itself. We know that in computers, security holes allow viruses, worms, malware, kernel hacks, data breaches and countless other named and un-named ways to make the computer do what it shouldn’t. We should expect the Law to be similarly hackable.
Most importantly, software and the law share a common weakness: they’re both built on discrete categories, not the flowing real numbers of Nature. Nature has no sharp-edged borders anywhere. Made-up borders give categories, symbols and even logic an artificial certainty which doesn’t hold up in real life. For example, in a computer, a single-bit error crashes the core; in politics, a constitutional ambiguity can incite revolution. Nervous systems aren’t so brittle, being continuous in space and time to match the world they live in.
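A toy contrast makes that brittleness concrete (a sketch, not a model of any real system): flipping one bit in a discrete category inverts the outcome entirely, while the same small perturbation to a continuous signal merely degrades it.

```python
# Brittle discrete categories versus graceful continuous signals
# (purely illustrative).

def discrete_verdict(has_standing: bool) -> str:
    # Sharp-edged category: everything hinges on a single bit.
    return "case heard" if has_standing else "case dismissed"

def continuous_estimate(signal: float, perturbation: float) -> float:
    # A continuous system absorbs the same small disturbance gracefully.
    return signal + perturbation

print(discrete_verdict(True))       # case heard
print(discrete_verdict(not True))   # one flipped bit: case dismissed

print(continuous_estimate(0.98, 0.0))    # 0.98
print(continuous_estimate(0.98, -0.01))  # 0.97: slightly degraded, still usable
```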
But even with the lubrication of natural bandwidth, Nature has hacking too. Hundreds of millions of years ago there were flying insects whose eyes saw specific colors, as well as plants whose pollen needed transport. To lure and reward the insects for transporting pollen, plants evolved special appendages and colors to tickle insects’ visual systems. We call those attention-grabbing innovations “flowers.”
In general, the colorful, aromatic attractants (flowers) hoisted up by plants benefit the insects they attract by providing edible pollen. But the coolest illustration comes from flowering carnivorous plants such as the venus flytrap, which both helps and hurts insects at once. Venus flytraps have a clutch of fatal fanged traps near the roots to capture prey, while higher up, on a very long stalk, a hospitable flower beckons other insects to visit and depart. Both catch and release. Two opposite flavors of sensory hacking.

In Nature, every kind of lure or camouflage — there are so many! — counts as an example of hacking. Humans are especially vulnerable to lures, above all when you consider how we hack ourselves with things we make and things we like.
Refined sugar is the best chemical for tickling human taste buds, making us want to swallow and eat more. The world now manufactures enough sugar for 20 kilograms per person per year, damaging health precisely because it is so yummy. Tobacco is the most efficient way to dose dopamine (via nicotine), the neurochemical driving habit-formation; that grip on motor habit explains an addiction strong enough to overwhelm the damage done to lungs. Opiates, chemical keys to pleasure receptors, drive even more fatal addictions. Bright colors and sparkly things attract our eyes, just as they do insect eyes. That could explain why every culture uses color everywhere, and why people become addicted to colorful screen-delivered content. Likewise, pure tones and harmonies attract our ears, so we can hack ourselves by making music, or let earbuds do the hacking. Images and sounds of people tickle our social senses, making talk shows attractive to lonely people.
The wide algorithmic lens used here shows what these hacks all have in common: information compression. Tasting sugar (and/or fat) is a quick marker of caloric food, but it’s only a marker, not a meal. Tasting dopamine ought to be the feeling of a job well done. Tasting opium ought to be the feeling of spiritual bliss. Attractive colors, sparkles, shapes, sounds and actors ought to be the first cues to interesting interactions. Unfortunately, as we know from real life, packaging can deceive, and will do so when it can get away with it.
Regardless of benefit or harm, what makes the hack a hack is the redirection of information. You can see it in how the “hackee” (the one being hacked) treats sensory input before and after. Before the hack, the hackee perceives the lure as a neutral collection of inputs to be investigated further from many angles. Therefore, the lure acts as a high-bandwidth ingredient of interactive trust. After the hack, the hackee relies on those inputs as a trusted internal marker of what it believes, or of what it wants. That is, as a fixed, compressed marker of trust.
Principles of hacking legal systems
Hacking a legal system works pretty much like hacking an insect: you shift decision-making away from nuanced, context-aware interrogation into unambiguous, unquestionable categories of true and false. Before hacking a legal system, the law views contractual paperwork as ingredients in the live human conversation about what real people said and intended. That is, the Letters of the Law (thresholds and tests) are subservient to the Intent of the Law, as evaluated by in-person human trust.
After hacking, the law views specific paper clauses as determining everything else, including whether a human has any rights at all (e.g. by replacing court proceedings with private arbitration). That is, the Letter of the Law may contradict the Intent of the Law by overriding human trust. (This circular self-validation is how nonsense arises.)
The three appeals heard by federal judges (and overheard by me and friends) each recapitulate these features of hacking. Here they are:
Case 1: An attorney for the Department of Homeland Security (DHS) of the US Government argued DHS should not be bound by previous court rulings. He insisted that a prior court ruling which established that DHS engages in retribution should be ignored because the DHS charter contains a rule against retribution. That is, the failure of DHS to follow its own rule should be ignored because that same rule says it can’t happen.
In a more chilling example, the attorney objected to the court’s requirement that crowd-control officers give audible warning to people before firing on them with weapons. The court wanted people to be able to avoid harm, but the attorney said that determining audibility was subjective, being “up to the whim of the crowd.”
The DHS attorney narrowly interpreted the don’t-fire ruling as saying only that officers should not fire into a crowd containing the individual plaintiff who had won a lawsuit, but were otherwise free to fire on crowds without plaintiffs “of standing.” When Judge Gould asked about the public’s constitutional right to experience protests free from government intimidation and “chilling effects,” the DHS attorney ignored him.
Case 2: An online company (Tile Inc.) does not want to be sued for harm caused by their product. To prevent the case from reaching court, they claim their new contractual terms banning lawsuits (in favor of corporate-friendly arbitration) hold sway. To make that claim, they insist that merely emailing the new terms to the customer was enough to make the new terms binding.
This was based on the rationale that upon receiving the email, the customer should have investigated and quit using the product. Upon hearing this argument, Judge Nguyen looked astonished and said, “I get thousands of emails a day, I could never read them all!” Exactly: the law contradicts itself. On the one hand, people are legally obligated to read every email. On the other hand, it is impossible to do so.
Case 3: A company selling so-called “educational technology” does not want to be sued for harms caused by its product. The lawsuit alleges that the company, IXL (pronounced “I excel,” the better to appeal to parents), harvests and then monetizes its users’ data without their consent. Because it is an education platform, most of its users are K-12 students. The “benefits” are suspect and the harms are real, which is why the lawsuit is necessary.
That smarmy background is necessary to appreciate the arrogance and cluelessness of the company’s legal claims. Because a kid used the software at school, she could have read its legal Terms and Conditions. Because the parent did not pull the kid out of school, the family implicitly accepted those terms. Because those terms ban lawsuits (again in favor of arbitration), this lawsuit alleging the product causes harm cannot be heard in US court. Now the parent and kid have no rights to rectify the harm, or even to acknowledge it exists. The contract is so powerful that the instant your eyes behold its pixels, your rights evaporate.
This is the same deep point my partner Criscillia Benford and I spent two (unpaid) years shepherding through a prestigious AI journal. The point is worth making again: Trust is an interactive process dependent on physical context via high-speed interaction; it cannot be fixed or compressed. Compression throws away both data and interactivity, doubly undermining trust. While a compressed representation like a contract should point toward trust, to accept a compressed representation in place of real trust is dysfunctional.
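A minimal sketch of why compression undermines trust (illustrative Python; the keyword check stands in for whatever rule a real system might use): lossy compression maps many different realities onto one marker, so the marker alone can never recover which reality produced it.

```python
# Lossy compression maps many distinct situations onto one marker.
# Once compressed, the marker cannot tell you which situation produced it.

def compress(conversation: str) -> str:
    """Reduce a rich interaction to a one-word marker (lossy)."""
    return "AGREED" if "yes" in conversation.lower() else "DECLINED"

a = "Yes, I read these terms carefully and I accept them."
b = "yes yes whatever, just make this popup go away"

print(compress(a))  # AGREED
print(compress(b))  # AGREED -- identical marker, very different realities

# Decompression is impossible: two different inputs, one output.
assert compress(a) == compress(b)
```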
Compression of data enables legal deception
Some of the earliest evidence of written law in human history is Hammurabi’s Code, a collection of Babylonian laws. It outlined economic, family, criminal and civil laws — in other words, how humans interact. Contracts began in much the same way: as written records of distinct human relationships. That is, as compressed representations. But although the contract was on paper (or clay), real live humans had to witness, write and interpret those contracts. The paper contract marked a live handshake or promise.
When the Founding Fathers wrote the US Constitution, they had human hardware in mind: in-person votes, public speech on soap-boxes, printing presses and trials in which the accused faces the accuser close to twelve attentive jurors. All of those in-person interactions, micro-expressions and nano-gestures provide the high-bandwidth validation of reality which any nervous system needs. Informationally, there is literally a million-fold difference between the bandwidth of a contract (a few thousand bytes of fixed text) and a sensory system processing real life (megabytes per second).
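That figure is easy to sanity-check with rough numbers (all assumed, purely back-of-envelope):

```python
# Back-of-envelope check of the bandwidth gap (all numbers assumed).
contract_bytes = 10_000       # a few thousand bytes of fixed text
sensory_rate = 10_000_000     # ~10 MB per second of lived sensory input
meeting_seconds = 1_000       # roughly seventeen minutes face to face

sensory_bytes = sensory_rate * meeting_seconds
print(f"ratio: {sensory_bytes / contract_bytes:,.0f}x")  # ratio: 1,000,000x
```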
At first a contract couldn’t stand on its own, apart from the person who signed it. In case of disagreement, the contract’s counterparties could meet in person in court, and real people could decide whose interpretation was right. All that is changing fast.
One tipping point was the invention of the “corporation,” a fictional entity which has the same rights as a person but is really just a set of contracts, without heart or feeling. Once a non-human thing could have the (human) power to own and enforce a contract, it was only a matter of time before those fake-human entities also found ways to make The Law bend their way. Corporations began following written contracts more and social contracts less. At the time of the Founding Fathers, all business entities were actual people with families and opinions. Correspondingly, the main enforcement pressures were human: social contracts, social shame and threats of prison. Nowadays most businesses are abstract clouds of text with few identifiable owners and little human sense.
Governments bear equal blame for the accretion of nonsense. Once, governments merely collected taxes. Now, like administrations everywhere, governments create clouds of requirements as they try to exert more control over humans while spending less human effort of their own. The result is too many rules: each separately followable in principle, but collectively overwhelming. Paperwork is a huge help in making rules, because paperwork stays put and can be validated. A test can stand in for understanding, a certificate can stand in for competence, a waiver or disclosure can stand in for permission.
New technologies of mistrust are everywhere
Unfortunately, paperwork isn’t paper any more. Compared to paper, electronic records are cheaper to broadcast in bulk, easier to lose, easier to fake and easier to use against you. And unlike paper and ink, electronic bits have no physical, testable trace of truth, and thus no trustworthiness. With paper, one often used “certified mail” or “process servers” to prove a message arrived and was seen. Now it can be enough to merely claim an email was sent, based on a database entry, absent any other evidence it was seen or even arrived. But electronic bits tend to win for the simple reason that administrators receive the savings while humans bear the costs.
To be sure, electronic technology is technically neutral, at least until weaponized for gain. But that’s happening. The formerly neutral field of “user interface,” or human-computer interaction, has the active sub-field “adversarial design.” Adversarial design produces adversarial interfaces, which use persuasive technology (pixels, colors) to hack a user’s decisions against the user’s interests. The law recognizes the user’s decisions as binding, but does not notice the active deceptions which spurred them.
The worst innovation is automatic consequences. Now that machines can both record and execute, automated punishments (like red-light camera tickets) will become more prevalent, and will serve as precedents for even harsher auto-punishments.
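Here is a hypothetical sketch of why fusing recording with execution is categorically new (invented code, not any real system): the sensor’s output is the verdict, and the penalty fires with no human judgment in between.

```python
# A hypothetical record-and-execute pipeline: the sensor's output
# *is* the verdict, and the penalty fires with no human in between.

def camera_reading(plate: str) -> dict:
    # In a real system this would come from a sensor; here it's canned.
    return {"plate": plate, "crossed_on_red": True, "confidence": 0.51}

def auto_punish(reading: dict) -> str | None:
    # Note what is missing: no context, no appeal, no human review.
    if reading["crossed_on_red"]:  # a single discrete bit decides
        return f"citation mailed to owner of {reading['plate']}"
    return None

print(auto_punish(camera_reading("7ABC123")))
# -> citation mailed to owner of 7ABC123, even at 51% confidence
```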

California is evidence of hacking in action
Idiotic and/or dangerous rules and mandates are common in my home state of California, where attorneys craft so many laws. Each example of stupid legalism is ultimately a case of “technicality beats reality.”
Nearly every public building in California bears a sign like this: “WARNING: Entering this area can expose you to chemicals known to the State of California to cause cancer, birth defects, or other reproductive harm.” The warning is useless to humans because it offers neither the details and magnitude of the danger nor any way to avoid it. The placard serves only to let a lawyer mark a check-box.
Electronic highway signs which display configurable messages sometimes flash two or three messages in succession, such as “Road work ahead,” “55 mph speed limit” and “will be enforced,” each displayed for about two seconds. In California drivers are legally required to read such signs, but at highway speed a driver could never catch all three messages even by taking their eyes off the road for the full six seconds. Furthermore, the final message, “will be enforced,” carries no information about enforcement.
Another example is blinding headlights on the highway. The old regulations were written to keep incandescent lamps from being too bright, so when applied to blue-enhanced LED bulbs they utterly fail at reducing headlight brightness. Someone should estimate the body count of those who crashed because of LED headlights.
Speaking of driving, across cities, Californians are seeing more and more self-driving cars. Self-driving cars are allowed to operate if a human signs paperwork taking responsibility for anything that might go wrong. But no human nervous system can move fast enough to take over driving the instant an autopilot abdicates. The waiver is merely a way to shift blame away from the car company and toward the driver, nothing more.
As a more general example, office workers are often required to memorize a new random password every month, without writing it down, for security reasons. No one can do that. Similarly, many online activities require clicking a box asserting the boldfaced lie, “I have read and understood this contract….” No one ever reads and understands those things. Much like the three cases I described earlier, which attempt to hack the legal system, California has become a victim of hacking.
How to Hack-proof the Law
I am not an attorney, so I don’t know if the following ideas make legal sense. And I am not a politician, so I don’t know if they are politically feasible either. But as a lifelong engineer and scientist I know they would have the right effect on law anywhere in the world.
The Law must understand how nervous systems interact with information flows regarding trust. For example, informational toxins ranging from harsh blue lights to sociopathic chatbots do exist and cause harm, and will continue to be invented faster than any legislature can codify and regulate them. Disclaimers are a joke when applied to subconscious manipulation. Only a principled a priori understanding of trust will do, as described in my and Criscillia’s article, Sensory Metrics of Neuromechanical Trust.
All important decisions must be made by humans meeting in physical space, with the context-aware Intent of the Law always taking precedence over any particular Letter of the Law. The Law must also recognize that not all relations are commercial contracts. Social contracts have always mattered more to society. The putative existence of a commercial contract should never override more important forms of obligation.
The current contract doctrine of consent-by-use should only apply to simple physical products whose functions and implications are obvious, such as hammers. The more abstract, complex, remote or interactive the product — online products are many steps removed from physical reality — the less we should presume the user knows what’s happening, and the more responsibility the creator ought to bear. The idea that merely seeing some pixels deprives you of your rights is silly.
“Duty of care” (i.e., do no harm) should be expected of all products of all kinds, especially online products. Online products already kill thousands of people — social media is known to damage mental health. Such products hide in the legal blind-spot between the immunity of so-called “publishers” and society’s blindness regarding informational toxins.
Humanity’s grand tragedy is twofold. On the one hand the laws of modern society are ever more mismatched to actual human function, and create ever more human dysfunction and misery. On the other hand the Law has always been and could only ever be driven by actual humans, its sharp edges smoothed by native human bandwidth. The Law hurts us, yet it still needs us.
[Cheyenne Torres edited this article.]
The views expressed in this article are the author’s own and do not necessarily reflect Fair Observer’s editorial policy.