Monday, October 6, 2014

SELF-DEFENSE AGAINST DRONES - HUNTERS USING DRONES TO KILL - HUNTING DRONES

When I was young, this would have been

A NIGHTMARE

Now, it is real.

Scarcely a week goes by without a story in the news about drones, whether it is a Senator finding a drone peering in her window, or a small town in Colorado discussing whether to offer drone-hunting licenses (in the end they voted not to). The fear that a drone may be watching you is far from unreasonable. Today's news, for example, is that up to 20 percent of the Border Patrol's Predator-drone flight hours take place in the US; meanwhile, in Miami, where we both live, the police department has a fleet of drones out on patrol.

This week's arrest of a man who took a shotgun to an airborne drone is only one of the most recent warnings that we need better legal rules — and better social norms — about drone overflights, and that we need them now both to prevent harm to people and to prevent wrongful shootdowns. Similar, if less dramatic, questions apply to dangers posed by other robots, such as driverless cars. As it happens, we have some suggestions for rules that would apply to all robots, whether autonomous or remote controlled, that pose any potential threat to life or limb, to property, or to privacy.

Our recent paper, Self-Defense Against Robots, which we presented at the WeRobot 2014 conference held at the University of Miami School of Law, focuses more on civil remedies such as tort law than on criminal law issues. (The New Jersey case made the news because the shooter was charged with criminal mischief and possession of a weapon for an unlawful purpose.)

Our proposals for limited self-help rules would, we believe, create reasonable incentives to use robots responsibly where they might impinge on others' property. Equally importantly, our proposals that drones be marked in ways that make their capabilities clear would go far to defuse otherwise understandable fears that robots may be spying on people. And in the cases where the drones are in fact capable of spying on people, our proposals would give the drone's targets more of a warning that their privacy was endangered — a warning that would help restrict self-help countermeasures to the cases where they are appropriate.

The first part of our argument is that, as a general matter, when a person fears for her safety, property, or privacy, the same self-help doctrines that govern other issues should govern a person's use of self-help against a robot, whether that robot is operating on land, air, or sea. That is, an individual threatened with harm should be able to employ countermeasures that are reasonable in proportion to the harm threatened. The rule shouldn't be different just because a robot poses the threat. Thus, as a general matter — but subject to some pretty important exceptions — a person who reasonably fears harm from a robot has a right to act to prevent that harm, up to and including, in some — but far from all — cases, shooting it down.

The fact that it is a robot posing the threat is significant. The law treats robots as property. (If robots ever achieve consciousness, or perhaps even the ability to simulate consciousness convincingly, we may need to revisit that status, but that day likely is far off, and we're concerned about the present.) Acts of self-defense that would be unreasonable when threatened by a human will in many cases be reasonable — in an otherwise similar situation — in response to threats from a mere chattel. The toughest question is the scope of permissible self-help when individuals fear for their privacy rather than for their safety or property, and we'll get back to that below after we talk about the easier cases.

The law puts a much higher value on life than on property. Thus, an individual who fears for her bodily safety — or that of another — has the most freedom of action when she is acting in self-defense against a threat posed by a chattel.

When only property is threatened, the test for lawful self-help becomes one of reasonableness. A person may meet a threat from a chattel (the robot) with a reasonable amount of force, a test which the law largely makes into a cost-benefit analysis. Thus it becomes primarily a matter of economics: in general, a person defending her property cannot cause (or, more precisely, cannot risk an expected harm of) more than the threatened property is worth, within the margin of error stemming from the nature of the reasonableness test generally and the law's allowance for what will inevitably be hasty decisions, made under imperfect conditions, with imperfect information.
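To make the economics concrete, here is a toy sketch of that expected-value comparison. The function, the dollar figures, and the probabilities are illustrative assumptions of ours, not a statement of the legal test:

```python
# Toy illustration of the cost-benefit intuition behind the
# reasonableness test. Every number and the decision rule itself are
# hypothetical; a real court weighs far more (urgency, alternatives,
# risk to bystanders), and none of this is legal advice.

def self_help_looks_reasonable(threatened_value: float,
                               robot_value: float,
                               p_harm: float,
                               expected_bystander_harm: float = 0.0) -> bool:
    """Compare the harm the defender inflicts (robot value plus expected
    collateral damage) with the expected loss she prevents."""
    expected_loss_prevented = p_harm * threatened_value
    harm_inflicted = robot_value + expected_bystander_harm
    return harm_inflicted <= expected_loss_prevented

# A cheap drone that looks likely to wreck a valuable greenhouse,
# in open country with no one nearby:
print(self_help_looks_reasonable(10_000, 500, 0.8))          # True
# The same drone over a crowded street, where shooting it down
# creates large expected collateral harm:
print(self_help_looks_reasonable(10_000, 500, 0.8, 20_000))  # False
```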

What that means in practice will obviously vary enormously with the facts. Among the issues that are most likely to be relevant are the value of the property being protected, the apparent value of the drone to the (often limited) extent the defender can tell, the urgency of the threat, the availability of less drastic protective measures, and — not least — the extent to which there is a reasonable danger that the self-help measure might cause harm to bystanders or their property.

To make this long list more concrete, a mere technical trespass — an overflight — without more, will never justify shooting down a drone in an urban area, because the risk to others is too great. It's unlikely that courts will be at all sympathetic to people who blast drones out of the sky when those drones go on to hit a neighbor's house or car, much less when they go on to hit the neighbor herself.

Thus shooting at drones or other robots in urban areas is unlikely to be reasonable unless the risk posed is either to a person's safety or perhaps to some uniquely valuable object. The calculus may be different in a rural place where the risk to third parties is negligible; in the country, the scope of self-help to protect property may be broader. But even there, a mere technical trespass — an overflight without something more — will not justify damaging the drone, because the damages for a technical trespass are usually considered small, and the harm to the drone will inevitably be greater than the trespass damages.

There's a further complexity. Because of a complicated interplay between FAA rules and state law, which we detail in our paper, there will often be uncertainty about whether there is some height above which drones could fly with impunity. Where once upon a time a landowner's property rights extended to the moon if not beyond, that rule died with the age of air travel. Manned aircraft, even quite low-flying ones, are not trespassers — at least so long as they comply with FAA rules on overflights, which usually means 500 feet or higher. Helicopters can go even lower without trespassing so long as they are operating in a safe manner. As a result, unless the drone is quite low, there may not be an actionable trespass at all. And if there is no trespass there may be no intrusion justifying self-help.
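The resulting altitude bands can be pictured with the sketch below. Its cutoffs are placeholders, not law: the 500-foot figure tracks the usual fixed-wing minimum, the 83-foot figure merely echoes the overflight height in United States v. Causby, and the middle band is exactly the gray area our paper explores.

```python
def overflight_status(altitude_ft: float) -> str:
    """Illustrative three-zone view of drone overflights. The cutoffs
    are placeholders: 500 ft tracks the usual FAA fixed-wing minimum,
    and 83 ft merely echoes United States v. Causby; no statute or
    court has actually fixed a curtilage ceiling for drones."""
    FAA_FLOOR_FT = 500.0       # navigable airspace for fixed-wing craft
    CURTILAGE_GUESS_FT = 83.0  # hypothetical 'immediate reaches' ceiling

    if altitude_ft >= FAA_FLOOR_FT:
        return "navigable airspace: overflight alone is not a trespass"
    if altitude_ft <= CURTILAGE_GUESS_FT:
        return "immediate reaches of the land: trespass is at least arguable"
    return "gray zone: FAA rules and state trespass law give no clear answer"

print(overflight_status(600))  # navigable airspace ...
print(overflight_status(50))   # immediate reaches ...
print(overflight_status(250))  # gray zone ...
```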

Worse, it gets even more complicated and interesting where the threat is (or reasonably seems to be) to privacy. A trespassing, spying drone can do a lot of damage, but privacy harms are hard to monetize, especially ex ante. That means it is hard to weigh the potential damage against the harm that the self-helper risks doing to the offending chattel. Not only is privacy hard to value in general, but in this case the victim cannot know in advance how the operator of the drone intends to use the photos, hacked wifi, or whatever else the drone may be collecting.

In light of this uncertainty piled on difficult valuation, we argue that the scope of permissible self-help in defending one's privacy should be quite broad — else privacy will inevitably lack the protection it deserves. There is also exigency: resort to legally administered remedies would be impracticable — the drone will be long gone — and, worse, the harm caused by a drone that escapes with intrusive recordings can be substantial and hard to remedy after the fact. Further, it is common for new technology to be seen — reasonably — as risky and dangerous, and until proven otherwise drones are no exception. At least initially, violent self-help will seem, and often may be, reasonable even when the privacy threat is not great, or even real, at least when the risks of collateral damage are small.

Spy drones that stand off the property line, or that fly above whatever limit marks the vertical curtilage, are legally akin to paparazzi. At present the FAA has rules limiting commercial drone use, but hobbyists today, and perhaps commercial drone operators in the near future, who avoid overflying private property may have legitimate claims to First Amendment protection for at least some photography. That said, we understand why people would be concerned to learn that drones might someday aim telephoto lenses into their bedrooms from the sky. At some extreme point, exotic torts such as intrusion upon seclusion or intentional infliction of emotional distress may come into play.

The current, and still somewhat preliminary, draft of our paper addresses seven specific issues we identify in current law:

  1. Because both self-defense and defense of another person are privileged when a mere chattel reasonably appears to present a physical threat, some people may be too willing to destroy robots when they feel threatened by them, and the law will tend to permit this;
  2. Because it will be difficult for the average person to know the capabilities of an unfamiliar robot — something essential to making good judgments about how dangerous the robot might be — some people will over-protect their property against damage from robots. What is more, so long as this uncertainty about robot danger (whether as a class, or in specific cases of ambiguously dangerous robots) is widespread, tort law will tend to treat this over-protective behavior as "reasonable" and thus privileged;
  3. Relatedly, the great difficulty of assessing the privacy consequences of a robotic intrusion will also lead people to err — reasonably — on the side of caution and thus self-help. To the extent that tort law recognizes a right of self-help against privacy intrusions, the law will tend to privilege that conduct also;
  4. These considerations will apply even more strongly to aerial robots (drones): people will have significant practical difficulties in identifying and assessing the position, actions, and capabilities of aerial robots. The resulting uncertainty will make some property owners too willing to take offensive action in perceived self-defense. Tort law is likely to be solicitous of the property owner's need to make quick decisions under uncertainty. That solicitude will not, however, extend to actions that present a reasonable risk of danger to third parties, such as shooting into the air in populated areas;
  5. As noted above, there is uncertainty as to the vertical perimeter of property, something people will need to know in order to determine when an aerial robot is committing a legal trespass;
  6. The law is unclear as to the extent of the privilege for self-help in the face of privacy torts like intrusion upon seclusion;
  7. Under tort law principles, a person's privilege to defend her property by harming a robot reasonably perceived as dangerous will turn on the value of the robot as much as on the value of the property being threatened. A person can be expected to know the value of the property she is protecting, but the law will recognize that it will be difficult for the canonical ordinary reasonable person to estimate a robot's value in a timely manner during an emergency. If courts attempt to rely on the reasonably perceived value of the robot, that creates an incentive for robot designers to make their robots look more expensive than they are. Encouraging the gilding of robots to make them resistant to self-defense predicated on tort claims of property damage seems undesirable.

Our initial set of proposed solutions to these problems begins with the observation that most of these problems spring from some kind of uncertainty about, or relating to, robots. We therefore suggest measures to reduce those uncertainties.

For starters, we suggest a total ban on weaponized robots in the US. A blanket ban would remove one of the greatest threats people might otherwise perceive, and make it much less reasonable to respond to a large class of robot intrusions with force.

We also propose that all mobile robots should be required to carry warning markings, lights, and the equivalent of a Vehicle Identification Number (VIN) that would be recorded in a state or national registry. At present, there's no practical way to tell a robot's capabilities by looking at it (and drones can be hard to see, especially in the dark). Although far from perfect, these notices would be calibrated not just to warn of the drone's presence, but also to say something about its capabilities, such as whether it carries a camera, and whether it is capable of capturing sounds or wifi or other information.
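As a purely illustrative sketch, a machine-readable version of such a notice might look like the record below; the field names and capability flags are our invention, not any existing standard:

```python
from dataclasses import dataclass

@dataclass
class RobotNotice:
    """Hypothetical capability disclosure tied to a VIN-style ID.
    The fields are illustrative; no such standard exists today."""
    robot_id: str            # unique identifier recorded in a registry
    owner_of_record: str
    has_camera: bool = False
    records_audio: bool = False
    captures_rf: bool = False  # wifi or other signal interception
    weaponized: bool = False   # banned outright under our proposal

    def declares_harmless(self) -> bool:
        # No sensing or weapons capability disclosed; under the regime
        # discussed below, self-help against such a robot would be
        # presumptively unreasonable.
        return not (self.has_camera or self.records_audio
                    or self.captures_rf or self.weaponized)

notice = RobotNotice("US-FL-000123", "Example Operator LLC", has_camera=True)
print(notice.declares_harmless())  # False: it admits to carrying a camera
```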

Setting up a licensing regime and national or state-based registries would help connect a malfeasant robot to its owner or user, but no single system is likely to work in all circumstances. Because drones can be small and may be used outdoors in low-light conditions, license plates or airplane-style markings alone may be poor solutions; by contrast, license plates or markings should work well for larger, purely terrestrial robots. In addition to markings, all aerial robots should be required to carry an active RFID chip with the maximum practicable range given the state of RFID technology. The required range could be adjusted annually for new robots, based on improvements in RFID technology, until it slightly exceeded the minimum flight altitude the FAA is likely to establish for drones.
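And, purely as illustration, an RFID broadcast could be linked back to a registry entry with a lookup like the one below; the payload layout and the registry shape are assumptions of ours, not a real protocol:

```python
# Hypothetical beacon-to-registry lookup. Assumes a ground reader that
# can receive the drone's RFID payload at the mandated range; the
# 'id|checksum' payload layout and the registry itself are invented.

REGISTRY = {
    "US-FL-000123": {"owner": "Example Operator LLC",
                     "capabilities": ["camera"]},
}

def identify(beacon_payload: str) -> dict:
    """Split the payload into ID and checksum, then look up the ID."""
    robot_id, _checksum = beacon_payload.rsplit("|", 1)
    return REGISTRY.get(robot_id,
                        {"owner": "unknown", "capabilities": ["unknown"]})

print(identify("US-FL-000123|a7"))
# {'owner': 'Example Operator LLC', 'capabilities': ['camera']}
```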

No discussion of a notice regime would be complete without some discussion of cheating. Notice regimes are ineffective once bad actors become sufficiently numerous: in a world of widespread cheating, notice is not reliable, so it becomes more reasonable to treat every drone as a potential threat. Indeed, even a relatively small number of bad actors — liars — can undermine a notice regime if they cause dangerous false reliance. Enforcing disclosure rules for robots in general, and drones in particular, will be difficult, but civil and even criminal penalties for false statements may be in order.

Were we to transition to a legal regime in which the default rule privileged reasonable self-defense, but in which the owner-operator's standardized and intelligible declaration of harmlessness made self-defense presumptively unreasonable, then a false statement of harmlessness should be treated as fraud or worse. We propose that the penalty for misidentifying a robot be comparable to that for falsifying or obscuring a license plate, and that the penalty for falsifying or altering a robot's internal unique identification number be equivalent to the penalty for altering a VIN.

Our goal in making these proposals is to help create a legal climate in which people and robots can best flourish. Robots, including drones, have a great deal to offer; we would like to maximize the benefits of this rapidly spreading technology while avoiding, as much as possible, its potential dangers.

http://www.washingtonpost.com/news/volokh-conspiracy/wp/2014/10/03/self-defense-against-overflying-drones/
