CITP Luncheon Speaker Series: Kiel Brennan-Marquez – Plausible Cause


[Host:] Thanks for coming to today's lunchtime seminar. We have Kiel Brennan-Marquez speaking to us today. Kiel is a postdoc at NYU and also a visiting fellow at the Information Society Project; he has a JD from Yale as well as degrees in philosophy from Pomona. His work looks at some really interesting topics at the intersection of big data and the judicial process, and he's going to be talking to us about probable cause, "plausible cause," and how these two things relate. It should be a very interesting discussion. Thanks for coming.

[Brennan-Marquez:] Great, yeah, thanks for having me. I'm really excited about this. As Ben, who helped arrange this, knows, the conversations at NYU tend to be very policy- and law-focused, with the technical questions as the backdrop, and I think this will be somewhat the inverse of that, which I'm looking forward to. It's the first time I've actually given this talk in that kind of room, so one thing I'll just say at the outset: obviously cordiality is appreciated, but feel free to tell me that I'm completely wrong on technical issues, or that some of my policy or normative arguments run into difficulties on the technical side; I'm not going to try to address technical issues per se. That's super valuable feedback to get.
Generally, I'm going to use the Fourth Amendment's probable cause requirement as a case study, but the general problem I'm trying to grapple with is the automation of governance functions. I'm especially interested in areas of law and policy where two conditions obtain. The first condition is that we've traditionally required powerful actors to articulate the basis of their decision-making; in other words, the way that we've constrained the exercise of power in this realm is through some kind of explanatory standard, one that requires the actor exercising the power to explain what's going on, to explain why they're exercising the power as they are. That's condition one. The second condition is that the nature of the decisional task is such that the success or failure of a particular decision is reducible to discrete criteria; in other words, that the decisional task is capable, in principle, of being formalized.

The reason the coalescence of these two conditions interests me is that formalizable decisions are susceptible to automation, in principle, increasingly now in practice, and going forward in many other domains in practice. And again, push back on me about what we mean by "automation" on the technical side; I'm talking in broad strokes. Automated decisions have the capacity to be more precise than decisions by humans that are backed up by explanations. So there are many realms in which automated decision-making that either is not, or in some sense cannot be, explained will be more precise, in the statistical sense of minimizing false positives, or maximizing the ratio of true positives to false positives, than human decision-making supported by explanations.
This raises a very important question, which I think historically we've rarely had to confront squarely, or in some sense have never really confronted squarely: what are the values, apart from statistical precision, apart from reduced error, again in the specific sense of minimizing false positives or maximizing the true-positive-to-false-positive ratio, that explanatory requirements serve? It's not a new idea to say, as I'm going to argue in the Fourth Amendment setting and more broadly, that we should insist that powerful actors explain what they're doing, that there are important values advanced by that. That's not new at all; it's central, I think, to the liberal democratic tradition, and certainly to a lot of twentieth-century social theory, and maybe in some ways much more broadly than that. But historically, in finding a normative anchor for that proposition, for that commitment to explanation-giving, for the insistence that the powerful explain themselves, we've been able to rely on the way in which explanations, in a pre-automation age, actually facilitated statistical precision.

So historically we've thought, and this is a common theme through a lot of liberal democratic theory, that unexplained decisions, when we allow actors to make decisions without giving accounts of what they're doing, tend, on the margins at least, to be erroneous. When an actor doesn't have to explain, when the welfare adjudicator doesn't have to say why they're denying welfare benefits, the judge doesn't have to give an explanation for his or her opinion, the police officer doesn't have to explain why they want to go into the home, we think: well, if you don't have to explain what you're doing, then the prophylactic against error that comes with explanations won't be there, and there will tend to be errors, or the institutions of power will tend to become abusive. But going forward, that set of rationales, as a category, I don't think is really going to work as a normative anchor for explanatory standards. In fact, if we were to focus at a normative level on statistical precision, on the minimization of error, we might actually end up wanting, on normative grounds, to reject explanatory standards in a lot of settings, precisely because automated decision-making that either doesn't offer an explanation, or is insusceptible even in principle to an explanation, will be more precise in some realms than human decision-making.
So we then have to ask ourselves, in domains that have this character, and we can talk about which domains these really are, how broad the scope of my claim or its practical upshot is: do we want to continue to insist on explanatory standards, on explanation-giving practices, even in an age of automation, where the automated version that's unexplainable, or unexplained in practice, could yield greater precision? I think this is really important against the backdrop of the burgeoning conversation about algorithmic governance that's blossomed in the last decade, and especially in the last five years. I'm sure many of you are familiar with books like The Black Box Society by Frank Pasquale; I'm going to use that as the example that stands in for a whole emerging genre. One of the things that defines this genre is that the proponents of greater oversight of algorithms, or of algorithmic transparency and governance, whom I agree with in terms of the spirit of what they're doing, actually focus on algorithmic inaccuracy, algorithmic imprecision, as the main source of the problem, the main reason why we would want greater transparency. So Frank and others will point to examples of algorithms making mistakes, often mistakes that would have been obvious to a human, which seems in some intuitive way to inflame the error. But the problem with focusing on algorithmic inaccuracy is that if accuracy is really the fulcrum, then the claim becomes an empirical one based on present technology, and pretty soon, if accuracy is the fulcrum, we're actually going to want to favor the machine-based solutions, not the human solutions. So it seems to me that there's a major risk here of a lack of clarity about the actual normative stakes, and a corresponding danger of a Pyrrhic victory: we get transparent algorithms that are auditable, that allow us to purge some of the error, or allow for dynamic evolution through time to deal with the cases that Frank and others are identifying as gross inaccuracies, but ultimately we don't get what I think in some realms is really the important thing, which is maintaining the set of explanatory norms in the world, especially for certain governance functions. Okay, so that's the abstract introduction.
I'm going to be talking about this in the context of the Fourth Amendment's probable cause requirement. I don't know how much knowledge of this there is in the room, but the very high-level overview is that the Fourth Amendment requires law enforcement officials to demonstrate probable cause as an enabling condition of performing searches and seizures. Traditionally this is enforced through the warrant requirement, though there are some exceptions to the warrant requirement; but whether the police go get a warrant, or they act of their own accord and then defend the decision against a challenge down the line, the same substantive standard, the probable cause standard, applies. The idea is that probable cause is best understood as a kind of epistemic benchmark. The police have to make certain kinds of showings before they engage in intrusive activity, like going into a home, searching a car, or performing a bodily frisk of someone, and the question is what kinds of showings are enough, what kinds of showings furnish a justification. Probable cause and the warrant requirement are really just a mechanism to justify activity that would otherwise be unacceptable. One way to think about it is that if a private person did to you what the police do, it wouldn't be allowed; it would be a trespass, or a battery, or any number of other categories of crime or tort. So when the police go get a warrant, or when they show cause, what they are doing is seeking, and subject to judicial approval receiving, authorization to do this thing that's otherwise not allowed. If we think about it in epistemic terms, then, and I'm going to focus on the example of searching a home, the question is: what kinds of facts about a specific residence do the police have to show in order to make it appropriate for them to go into the home, or in order to convince a judge, who ultimately is the one that gets to decide whether to issue the warrant, that it makes sense to go into that home, that the police have the authority to do it? What would the judge have to see? So with that in mind, let's consider a thought experiment.
This is one I work through in the paper; I don't think I circulated the paper, but this is its focal point as well. Imagine that the NYPD wants a warrant to search some particular residence in New York City; I'll take the residence where I used to live, 285 Court Street, apartment 2L, in Brooklyn. In the affidavit accompanying the warrant application, the officers set forth one, and essentially only one, kind of fact in support of probable cause: that 285 Court Street, apartment 2L, came up on the Contraband Detector, which the affidavit explains is a nifty new algorithmic tool that the NYPD developed to predict which homes around the city are likely to contain illegal weapons. The affidavit then goes on to explain, for the judge's benefit, because the judge doesn't have a technical background, what went into the creation of the Contraband Detector, in broad strokes. Most importantly, the data scientists who have been called in as an independent advisory body to perform an audit of the Contraband Detector, to verify its reliability, say that the Contraband Detector is reliable eighty percent of the time; that is, eighty percent of the residences that it picks out do in fact contain an illegal weapon, and it will continue to perform at that rate, if not better, going forward. However, the Contraband Detector uses many, many input variables, more than a hundred, with a very complicated weighting function over all of them. The variables are all drawn from historical policing records, and the upshot is that when the Contraband Detector picks out a residence as likely to contain an illegal weapon, it doesn't provide any explanatory account of why. It just says: 285 Court Street, apartment 2L, came up on our model.
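The auditors' "reliable eighty percent of the time" figure is a claim about statistical precision, which can be sketched like this. Everything here is invented for illustration: the addresses, the counts, and the function name are not drawn from any real system.

```python
# Toy illustration of the auditors' claim about the hypothetical
# Contraband Detector, read as statistical precision. All data invented.

def precision(flagged, truly_contraband):
    """True positives divided by everything the detector flagged."""
    true_positives = sum(1 for home in flagged if home in truly_contraband)
    return true_positives / len(flagged)

# Five flagged residences; suppose four actually contain a weapon.
flagged = ["285 Court St 2L", "12 Elm St 3A", "9 Oak Ave 1B",
           "44 Pine St 2C", "7 Main St 4D"]
ground_truth = {"285 Court St 2L", "12 Elm St 3A", "9 Oak Ave 1B",
                "44 Pine St 2C"}

print(precision(flagged, ground_truth))  # 4 of 5 flagged -> 0.8
```

Note that this number is a property of the detector's output as a whole; it says nothing about why any particular home was flagged, which is exactly the gap the talk is concerned with.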
When I give this talk to more of the legal policy community, sometimes I have to reassure people that this is a real problem we have to think carefully about, because this kind of thing is coming; lawyers sometimes don't believe that this is actually where the law enforcement system is going. But it's even starker than that: it's where things have already gone. I don't know how much folks have been reading about this, but when it comes to bail decisions and sentencing decisions, algorithms are already in wide use, and we're starting to see the first generation of them in the suspicion context as well. The point is that this is clearly where we're headed, given the cost savings associated with this model and given the widespread dissatisfaction with law enforcement in this country; I think there will be an increasing set of pressures in this direction, following on the heels of things like CompStat and earlier developments in the ideology of policing.

So the question, in my example as I laid it out, and we can play with it in Q&A if people are interested, is: should the judge sign this hypothetical warrant application? Is an output from the Contraband Detector, as long as the judge believes that what the data scientists have said in the audit is correct, as long as we're satisfied that eighty percent really is the number, enough to warrant the search of 285 Court Street, apartment 2L? Or, even better: could the judge's role in this process, in principle, just be eliminated, such that whenever the Contraband Detector locates a residence, it simply prints out a search warrant? After all, if the point is that the eighty percent benchmark is sufficient, why do we even need the judge, as long as we have checks in place making sure that the algorithm is performing well? You might have an auditing scheme in place as a background matter, but the judge, in any particularized way, in each case, isn't really playing any role here. So we could imagine not a contraband detector but a probable cause machine, a warrant machine. My claim is that the answer to these questions is no: the output from the Contraband Detector is insufficient. The interesting question is why, and I think this is actually, at least in the legal community, a very widespread intuition.
So the question is why, and the basic reason, I think, and again I'm using the Fourth Amendment as the case study, but we're going to try to zoom out as well, is that even though the phrase "probable cause" seems to have some valence toward mathematical conceptions of probability, seems to convey a statistics- or numbers-based model, at least in principle, it's really about explanations. The point of the standard is not, and really never has been, to maximize the statistical precision of law enforcement as an enterprise, to minimize false positives in the sense of minimizing the number of innocent people, or innocent homes, or innocent targets, that are swept up into the net of law enforcement. Instead, the point of this standard, and I think of many other governance standards along these explanatory lines, is accountability. It's about requiring the police to articulate their reasoning in specific cases, and as a consequence, probable cause, or "plausible cause" as I call it in the paper, determinations can never be fully automated. Even if automation were to promise a major reduction in false positives, even if it would reduce the number of innocent people or targets subject to intrusion, it would subvert the Fourth Amendment's broader governance goals. I'm happy, and in fact I'd be very interested, to talk during the Q&A about what exactly I mean when I say that probable cause determinations can't be automated, or what automation involves.
[Audience member:] Could we have the computer come up with an explanation as well?

[Brennan-Marquez:] Yeah, let's table that and talk about it, because I'm definitely not trying to argue, in this piece or more generally, that we need to banish these tools, or even to argue that the tools themselves couldn't substitute fully for certain aspects of the process. But the broader question that I want to ask is: what are the governance goals, besides statistical precision, that explanatory requirements like the Fourth Amendment's probable cause standard serve? What would be lost if we fully automated, if we had the warrant-printing machine based on some numerical threshold that we had decided, for some set of normative reasons, was enough to satisfy the Fourth Amendment understood as a precision metric? What would the consequences be for law enforcement, for the governance system as a whole, that we might be worried about?

I'm going to talk about four consequences, and I think not all of them will show up in every governance domain that has the characteristics I was describing at the outset; I do think all four show up in the law enforcement domain, which is what makes it a helpful case study. But the point is not that all four, or some combination of the four, should make us worried; it's that I'm trying to catalog what the governance goals of explanation-giving are, again, apart from statistical precision, because I think that will provide a set of metrics for having a more coherent and holistic normative discussion about when algorithms can substitute for human decision-making and when they cannot.
The first governance goal is a very familiar one now, I think, which is just the regulation of input variables. We know that there are certain kinds of input variables, traditionally at least, and likewise certain forms of reasoning, that are worthy of scrutiny and regulation, and that perhaps ought to be forbidden, totally independent of their probative value or their predictive power, of their relationship to the metric I've been describing thus far, statistical precision. The obvious examples are race and gender, close proxies for race or gender, religion: the kinds of sensitive categories that, traditionally, and certainly this is the foundation of anti-discrimination law and policy, though it also shows up a lot in evidence law, we think it is inappropriate in some contexts, or at least worthy of significant regulation to the extent it is appropriate in any context, to use as a basis for sensitive distributive decisions, like who is subject to intrusion by the state. And again, we think this is true not because race and religion and gender are necessarily weakly correlated with, or of low predictive value for, the outcomes in these domains, though that may also be true in some domains; we have suspicions along those lines as well, that they're too-easy proxies, and maybe they're actually not part of the picture in reality at all. But even if they are part of the picture in reality, even if they do bear a significant statistical relationship to the outcomes we care about, we still think they are worthy of regulation, if not total banishment from the process. To some extent, of course, this problem is going to be amenable to technical fixes, which I'm interested in hearing about, and the question of what counts as a proxy variable in a world of powerful machine learning algorithms is an interesting question unto itself. We know in an informal way that in employment discrimination you can't use gender, and you also can't use pregnancy; that's clearly a proxy variable. But if we had an employment algorithm that picked out a fifty-variable matrix that had dramatically disparate impact along gender lines, to the tune of a hundred to zero, would that matrix of fifty variables qualify as a kind of master proxy variable? I'm not sure; I think it turns on whether we think of proxy variables as a technical category, a normative category, or both. So anyway, that's governance goal one: regulating input variables.
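The "master proxy variable" worry can be made concrete with a toy disparate-impact audit. This sketch is illustrative only: the data are invented, and the 80% (four-fifths) threshold is borrowed from US employment-selection guidelines as one rough screen, not a test the talk itself endorses.

```python
# Toy disparate-impact audit: a model that never sees gender can still
# produce starkly skewed selections. All data invented for illustration.

def selection_rates(selected, groups):
    """Selection rate per group; `selected` is a parallel list of bools."""
    counts, hits = {}, {}
    for sel, g in zip(selected, groups):
        counts[g] = counts.get(g, 0) + 1
        hits[g] = hits.get(g, 0) + int(sel)
    return {g: hits[g] / counts[g] for g in counts}

def passes_four_fifths(rates):
    """Lowest group's selection rate must be >= 80% of the highest's."""
    lo, hi = min(rates.values()), max(rates.values())
    return hi == 0 or lo / hi >= 0.8

# Who the hypothetical fifty-variable model selected, alongside each
# candidate's gender (a column the model itself never used).
selected = [True, True, True, False, False, False, False, True]
genders  = ["M",  "M",  "M",  "M",   "F",   "F",   "F",   "F"]

rates = selection_rates(selected, genders)
print(rates)                      # {'M': 0.75, 'F': 0.25}
print(passes_four_fifths(rates))  # False: 0.25/0.75 is well under 0.8
```

Whether failing such a screen makes the fifty-variable matrix a "proxy" is exactly the open question: the audit is technical, but the threshold and the decision to care about it are normative.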
Governance goal number two is constraining discretion. One thing that explanatory standards do is require actors, as a practical matter, to expend resources in advance of decision-making, to build their cases, to come up with explanations, and this helps ensure, I think, that consequential, sensitive distributive decisions, like, again, who is subject to intrusion by the state, do not occur too readily or too programmatically. This is actually a really key Fourth Amendment value, I think, and one that we've seen come to the surface recently in some of the developments at the Supreme Court level, and throughout the federal courts, in response to electronic and digital surveillance. The Court has shown a propensity for ratcheting up legal protections, specifically in the Fourth Amendment context, for expanding what is subject to the probable cause requirement in the first instance, in response to technological changes that make surveillance, monitoring, or data extraction cheap and easy. In a case called United States v. Jones a couple of years ago, the Court said that it qualifies as a search for the police to use a GPS monitoring device to follow your car around on the public road. So even though the police can obviously follow you in a patrol car basically as much as they want, as long as you're on a public road, and that's sometimes how they build their cases, they can't just put a GPS device on your car, or, I would say as an obvious extension of the case, use a GPS device that someone else put on your car, in order to effectively do the same thing. Likewise, a couple of years after that, in a case called Riley v. California, the Court said that it counts as a search for the police to look through your smartphone incident to an arrest. The traditional rule is that if you get arrested, it's fair game for the police to perform a frisk and to search for things that are on your person, and police nationwide had been claiming that authority, that rationale, for performing full searches of smartphones. The Court said no: the search-incident-to-arrest rule doesn't extend to a smartphone; there's just too much information on the smartphone.

The basic upshot of both of these cases, and there's a bunch of other things going on in the lower courts, which I'm happy to talk about if people are interested, is that surveillance gets much easier when technologies like this exist, and so we have to have hydraulically constraining rules in place. To go back to the governance goal, this notion of constraining discretion, I think what's really important here is to see that cases like Jones and Riley, this idea that we need legal rules to operate as a bulwark against technological change that makes surveillance too easy, are partly about just minimizing the amount of surveillance that happens, minimizing the amount of information the government has access to, and so on. But they're also, and I think in some ways more fundamentally, in terms of what the Fourth Amendment and these explanatory requirements are really trying to do in terms of reining in power, about discretion. When technology makes surveillance too cheap, as with GPS monitoring, or, as in the Contraband Detector type of example, when it makes it too easy to justify intrusion, so it's not that it enables surveillance as such, but that it enables you to generate the legal justifications necessary to perform surveillance, the effect is the same: the police have effectively blanket power to make choices about whom they're targeting, which homes, which people, absent the very form of judicial supervision that we think the Fourth Amendment is supposed to supply.
To put it more concretely: in the GPS case, one reason why we don't want the police to just be able to put a GPS monitoring device on your car is that they're going to start monitoring everyone, and they're going to know everything about everyone. Another reason, which I think is analytically independent, is that they're going to get to choose to monitor whoever they want, and it may not be everyone: it may be people of a particular religious or political affiliation, it may be people they don't like, it may be people they suspect of crimes that are totally delinked from what the normal law enforcement priorities are, and so on and so forth. So part of what the Fourth Amendment in particular, and I think explanatory requirements generally, are doing is making that kind of decision, where the police are making decisions about priorities and whom they're targeting, hybrid, in the sense that it's partly left to the discretion of law enforcement, but it's also partly constrained, in this case by judicial oversight, by some form of public oversight, some kind of check on that power.
What we should be really concerned about in the case of something like the Contraband Detector, and the whole point is that we should be concerned on all of these dimensions, but what we should maybe in some ways be most concerned about, is this: imagine the Contraband Detector picked out 10,000 homes, and imagine it's not even eighty percent; it's ninety-five percent, or ninety-eight percent, something very high. If that were sufficient as a kind of blanket authority to perform searches in any of those homes, one thing that might happen, analogous to the GPS monitoring, is that the police go into every one of those homes. That could happen in practice. What's much more likely to happen is that the police will now have in their back pocket a ready-made justification whenever they want to go into any of those 10,000 homes, and what they'll start doing is engaging in much more intelligent targeting of the homes, because after all, it's not in the police's interest to go into ten thousand homes; it's very costly. What they're going to do is say: great, we have authorization for 10,000 homes, whenever we want it; we don't even have to get the warrant now; any time we go to court, or to the warrant machine, we'll be able to get a piece of paper that allows us to go into the home, and that will now restructure our incentives and institutional dynamics around investigation. I think that is exactly the kind of harm, the kind of unfettered police discretion, that the Fourth Amendment is really about, at least at its origin, and I think through to today.
[Audience member:] The point you're making about constraining discretion doesn't necessarily imply that you need a human in the loop, right? You could require the algorithm's threshold to be raised to a level such that the number of homes is reasonable for the police to search, or you could constrain the discretion through a statute that says you have to randomly select among the homes the algorithm flags. Then the algorithm itself is constraining police discretion, because presumably it wouldn't allow warrants for all homes.

[Brennan-Marquez:] Absolutely, there are a lot of ways.
There are a lot of ways we could do that, and one thing I want to say about all of these governance goals I'm identifying is that, in identifying them as normative rationales that I think have supported explanatory standards in the past, I'm not saying that explanatory standards are necessary to vindicate these goals. We could imagine vindicating them in other ways. What's concerning about losing the explanatory standard, about moving from a world with an explanatory standard to one of full automation without any of the same backstops in place, is that we lose sight of these goals, and then we have to have a conversation about how to keep them in. One way to keep them in is to preserve explanatory standards even in the age of automation, but that might not be the most efficient or the most sensible way in some realms. In some ways, with the warrant requirement, I'm giving a laudatory account; I'm trying to save the Fourth Amendment jurisprudence from itself, in the sense that I could have a co-panelist here who works on these same issues who would say: the Fourth Amendment reining in discretion? Give me a break. The Fourth Amendment has been used as an instrument of power; it's essentially a rubber stamp for the police.

[Audience member:] A lot of people would say that algorithms could constrain this, right?

[Brennan-Marquez:] Right, exactly. So it's not so much that I'm saying explanatory standards are the only way to do this; it's that they help us see what's been going on behind the scenes, besides just statistical precision, or the minimization of error, as a governance goal. And then they force the question, in a world where we have automated decision-making systems that are very good on the precision dimension, of what else those systems need to involve, whether it's a human in the loop or constraints on the decision-making system itself, designed along the lines of these goals, to secure the goals in the way that the explanatory standard used to, albeit imperfectly.
[Audience member:] I don't think that's all you're doing, but could you explain a little bit more what the end goal is? Is constraint of discretion an end goal in itself? You could also argue that you want discretion to be limited so that you can have improved accuracy: if police target people unfairly, that's probably also less accurate over the population as a whole.

[Brennan-Marquez:] I would argue, and this is actually a new paper that I'm working on, that constrained discretion is an end unto itself, in the sense that it's central to liberal democratic norms, to the notion of a liberal democratic society, to have constraints on power as such. It may also be an instrumental value; in fact, it surely is in some ways, and historically, one of the ways in which it's been an instrumental value, I think, is exactly along these precision lines. But I actually think it has some intrinsic function.
governance goal two, constrain discretion. The third governance goal is
thinking about measures of performance: in some ways, unpacking what we mean
by things like accuracy when we say that algorithms would be more accurate.
So I've been using the language of precision on purpose, because precision,
statistically, is only one way of talking about performance when we're talking
about accuracy. There are multiple measures of it, but another, I think
equally important, way is to talk about a different kind of statistical
property, which statisticians refer to as sensitivity or recall: not a
comparison of true positives to false positives, but a comparison of true
positives to false negatives. So it's not a question of how many wrong cases
you are sweeping into your selections using your selection method, but of how
many right cases you are skipping over.
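The distinction can be sketched in code (a hypothetical illustration; the counts here are invented, not from the talk):

```python
# Precision: of the cases you swept in, how many were right?
# Recall (sensitivity): of the right cases out there, how many did you find?

def precision(true_pos: int, false_pos: int) -> float:
    return true_pos / (true_pos + false_pos)

def recall(true_pos: int, false_neg: int) -> float:
    return true_pos / (true_pos + false_neg)

# Invented example: a detector flags 100 homes and is right about 80 of them,
# but it overlooked another 80 homes that also held contraband.
print(precision(80, 20))  # 0.8
print(recall(80, 80))     # 0.5
```

The same selection method can score well on one measure and poorly on the other, which is exactly the asymmetry at issue here.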
This is actually extremely important, because in the law enforcement setting
and other governance settings it goes to what systemic priorities are, what
enforcement priorities look like. So imagine, building again on the contraband
detector example, that for some reason we have grounds to know that in New
York City half of the homes that have illegal weapons in them have them
because they're connected to street crime in some fashion or another, and the
other half have them because someone who resides in the home is a member of
the NRA and has an ideological objection to registering their firearms. Under
the New York penal code, these cases are the same. But depending on the
training data the algorithm has learned on, it may be capturing both of these
types of cases, or it may be capturing only one. If it's learned on historical
policing records, given what we know about policing, it's likely only
capturing the first set of cases. Maybe not; that would be the question. But
the point is that from the fact that the algorithm performs at a high level of
precision, from the fact that it's eighty percent precise, we actually don't
know anything about the other dimension. We don't know anything about the
quantity and type of false negatives that are involved in this kind of
decision-making at large.
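To put rough numbers on the worry (entirely hypothetical; the talk doesn't give these figures), a detector trained on street-crime-linked records could hit the eighty percent precision mentioned here while missing the NRA cases altogether:

```python
# Hypothetical city: 200 homes contain illegal weapons,
# 100 linked to street crime and 100 linked to unregistered NRA members.
street_cases, nra_cases = 100, 100

# A detector learned on historical policing records flags 125 homes:
# all 100 street-crime cases, none of the NRA cases, plus 25 homes
# with no contraband at all.
found_street, found_nra, false_pos = 100, 0, 25

true_pos = found_street + found_nra
precision = true_pos / (true_pos + false_pos)           # 100 / 125 = 0.8
recall_overall = true_pos / (street_cases + nra_cases)  # 100 / 200 = 0.5
recall_street = found_street / street_cases             # 1.0
recall_nra = found_nra / nra_cases                      # 0.0

print(precision, recall_overall, recall_street, recall_nra)
```

Eighty percent precision is consistent with capturing both populations or only one; the precision figure alone carries no information about the false negatives or how they are distributed.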
And this is a tricky problem to relate back to things like Fourth Amendment
law and other individual-rights-based grievance resolution mechanisms, because
we don't typically talk about false negatives in the Fourth Amendment setting.
A false negative means the police didn't hassle you: the police didn't go into
your home even though there was contraband there, or didn't frisk you even
though you had contraband. That, of course, doesn't give rise to a Fourth
Amendment violation in the usual sense.
On the other hand, and again this is another paper I'm working on, or maybe
that same paper, I think that the Fourth Amendment, and legal rules related to
law enforcement power and surveillance generally, are about more than just
redressing individual grievances. They're actually about keeping checks on the
law enforcement system as a whole. And from that perspective, I think most of
us, and I don't think this is a minority view, care about what I'm calling
false negatives here, meaning we care about what law enforcement priorities
look like. If the contraband detector were finding only the street crime cases
and not the NRA cases, or vice versa, it would be the equivalent of the police
exclusively policing one neighborhood, exclusively policing the projects or
exclusively policing downtown, but doing so very precisely, so that their
filters within that context are good at sorting out false positives, in the
sense that they don't intrude on many innocent people's lives. But of course
we would take a step back, I think, and ask: is that really what we want the
police to be doing? Maybe, but I don't think it's self-evident that the fact
that they are precise in that domain means that the allocation of law
enforcement priorities across the board is an acceptable one. In fact, I don't
think you could derive a justification for law enforcement priorities across
the board from anything you know about how precise the tool is; the two are
just operating on different dimensions, and they're both important dimensions.
It's much better to have a world where, even if law enforcement is only going
into some communities and not others, it's at least being precise when it does
so; that's better than a world where they're only going into some communities
and being imprecise, which is arguably what we see now in at least some
cities. But the point is that we don't want to just go full bore toward
precision as the only goal here, and if we were to use the contraband detector
because of its precision, we would actually be subsuming this governance
decision into the tool itself. The tool, in guiding law enforcement, would be
substituting for this set of choices. So that's governance function number
three. Then the fourth governance function:
maybe this is the squishiest one, or certainly the place where I have the
least understanding of what the technical components would really be (I don't
have a great understanding of the technical side to begin with). The fourth
governance rationale is that there are many domains, and I think policing is
one of them, where we actually want there to be incentives in place for the
actors involved to learn from the machines. To the extent that algorithmic
tools, that machine learning techniques, are able to cast light on patterns in
the world that we didn't previously understand, that eluded human observation,
we actually want those new insights, to the extent possible (and this comes up
against the technical question, I know), to be reinvested back into the
governance system as a whole. So we actually don't want to delegate authority
away to machines, partly on the ground that we don't want to have happen to
the law enforcement system what's happened to all of us with respect to
memorizing telephone numbers, or, maybe I speak for myself, knowing our way
around instead of using Google Maps.

The point is that in automating, you delegate away functions that are
inefficiently costly for humans to be doing. It makes a lot of sense that I
don't have to memorize phone numbers anymore; that was a terrible waste of
everyone's labor power, I would say. But I'm not sure we want the same kind of
dynamic to take place in governance systems and in the law enforcement system:
we don't want the police to lose sight of what's going on in the world.
The other aspect of this is that, even putting aside whether these tools have
the capacity to improve law enforcement as an enterprise, there's also a
question of whether the insights the tools generate are best suited for the
law enforcement system, or actually present other kinds of social issues that
we want dealt with through other governance structures. Imagine, for example,
just hypothetically, that what the contraband detector really picked up on, if
we imagine this was in the historical records it's using, was that apartments
were very likely to contain illegal weapons if they had a resident who had
been exposed to lead paint when that resident was young. Of course we now know
something like this; but imagine a world where that was not already a commonly
understood insight about urban crime patterns, a world where we didn't know
that yet, so the contraband detector had effectively found it. It may have
found it in a way that's very hard, maybe impossible, for a human to discern,
because of what the tool looks like; again, that's the technical question. But
in principle I would want to say that in the world where the contraband
detector found that, we would want that new data point to become explainable,
not necessarily because we wanted it to inform the law enforcement process,
but because we wanted it to inform other processes. Maybe it's sufficiently
correlated to contraband, to illegal weapons, or whatever the specific crime
is, that we want it involved in the policing process; but it may just be that
we actually don't think the police are best suited to deal with this, and
maybe we even think the police would be counterproductive. The point is that
we have to be able to understand what the basis of the insight the algorithm
is working with is, in order to make that kind of allocative decision between
governance structures. So, as I said, I'm not trying to say that
all of these governance goals are present in all domains, or that explanatory
standards have historically been trying to do all of this work. The point is
just to catalog what it is we care about besides minimizing false positives,
which I think has been, for a variety of reasons, largely the focus of the
discussion: on the one hand, the for-profit entities that are developing these
tools often have an interest in selling the financial and other upsides of the
gains in precision; on the other hand, because that's the form a lot of these
tools are taking, that's where the governance discussion has gone. And I think
that has led to the problem I opened with, in terms of thinking about this in
relation to the other literature: namely, that folks like Frank Pasquale and
others are trying to stem the tide of what they see as techno-utopian
algorithm enthusiasm, and say, look, these things make mistakes, these things
can't be trusted, we need to impose transparency requirements, we need to have
all the usual kinds of due process protections in place. Again, I don't
disagree with the spirit of that; it's just that the reason we need it can't
really be, in the long term, at least in many domains, that the things make
mistakes or that they can't be trusted, because humans make mistakes and
humans can't be trusted, and we know that, and that's actually central to many
of our theories of liberal democratic society and the way we allocate power.
So then the question is: what else is going on here? That's what I'm trying to
canvass. So, one way to think about it
at a high level of abstraction, and then we can move to Q&A, is this. I said
at the beginning that I'm interested in domains where there's a trade-off
between performance and explainability. In some ways, what this is about is
that there can be a tension: the success or failure of particular decisions
may be reducible to discrete, formalizable criteria, and there are some
domains in which that condition holds, and yet the maximization of success as
to particular decisions will actually end up subverting some of the broader
goals of the decision-making system as a whole. And I think there's an
interesting open question as to whether that really just means that the
success and failure of individual decisions can't be reduced to discrete
criteria, or can be reduced only to discrete criteria that are not themselves
reducible to precision, so that we would have to have formalized criteria that
pick up on these various dimensions in one way or another. But my sense, and
this is the most speculative part of what I'm working with, is that what makes
this such a vexing and interesting set of problems is that there are times
when the success and failure of particular decisions really can be reduced to
discrete criteria. The Fourth Amendment setting is, again, a good illustration
of this: we know what makes for a successful search or seizure. It's a search
or seizure that actually finds contraband, that actually finds evidence of a
crime; that is what the police are trying to do. And yet maximizing along that
success/failure metric will perhaps lose sight of these broader governance
goals, goals that the traditional approach to success/failure maximization has
also been serving, and that we actually care a great deal about. So with that
I'll close, and I'm looking forward to what everyone has to say. Thanks.
