Taboo Trades
The Fight For Privacy with Danielle Citron
In this episode, my great friend and colleague, Danielle Citron, joins me and UVA Law students Gabriele Josephs and Aamina Mariam to discuss her latest book, The Fight for Privacy: Protecting Dignity, Identity, and Love in the Digital Age (W.W. Norton, Penguin Vintage UK, 2022). Danielle Citron is the Jefferson Scholars Foundation Schenck Distinguished Professor in Law and Caddell and Chapman Professor of Law at UVA, where she writes and teaches about privacy, free expression and civil rights. Her scholarship and advocacy have been recognized nationally and internationally. She is a 2019 MacArthur Fellow and the Vice President of the Cyber Civil Rights Initiative, which has been advocating for civil rights and liberties on equal terms in the digital age since 2013.
Her latest book, The Fight for Privacy: Protecting Dignity, Identity, and Love in the Digital Age (W.W. Norton, Penguin Vintage UK, 2022) was published in October 2022 and has been featured and excerpted in Wired, Fortune, and Washington Monthly, among others, and named by Amazon as a Top 100 book of 2022. Her first book, Hate Crimes in Cyberspace (Harvard University Press, 2014), was named one of the 20 Best Moments for Women in 2014 by the editors of Cosmopolitan magazine. She has also published more than 50 articles and essays.
Show Notes:
Citron, Danielle Keats, The Surveilled Student (August 25, 2023). Stanford Law Review, Vol. 76 (forthcoming), Virginia Public Law and Legal Theory Research Paper No. 2023-61. Available at SSRN: https://ssrn.com/abstract=4552267
The Fight for Privacy: Protecting Dignity, Identity, and Love in the Digital Age (W.W. Norton, Penguin Vintage UK, 2022)
Hate Crimes in Cyberspace (Harvard University Press, 2014)
Danielle Citron: This is what happens when your BFFs interview you for a podcast. This is the fun that we have. Hey.
Kim Krawiec: Hey, everybody. Welcome to the Taboo Trades podcast, a show about stuff we aren't supposed to sell, but do anyway. I'm your host, Kim Krawiec. I'm thrilled today to welcome my great friend and colleague Danielle Citron to discuss her latest book, The Fight for Privacy: Protecting Dignity, Identity, and Love in the Digital Age. Danielle is the Jefferson Scholars Foundation Schenck Distinguished Professor in Law and Caddell and Chapman Professor of Law at the University of Virginia, where she writes and teaches about privacy, free expression, and civil rights. Her scholarship and advocacy have been recognized nationally and internationally. She is a 2019 MacArthur Fellow and the Vice President of the Cyber Civil Rights Initiative, which has been advocating for civil rights and liberties on equal terms in the digital age since 2013. Her latest book, The Fight for Privacy: Protecting Dignity, Identity, and Love in the Digital Age, was published in October 2022 and has been featured and excerpted in Wired, Fortune, and Washington Monthly, among others, and named by Amazon as a Top 100 Book of 2022. Her first book, Hate Crimes in Cyberspace, was named one of the 20 Best Moments for Women in 2014 by the editors of Cosmopolitan magazine. She has also published more than 50 articles and essays.
Kim Krawiec: You okay? So, thanks, guys, for joining me today to have this conversation with Danielle Citron. I'm really excited about it. I'm sure that all of you are as well. First, why don't I get both of you to introduce yourselves to our listeners?
Kim Krawiec: Yeah.
Aamina Mariam: My name is Amina Mariam. I'm a 2L at UVA, and I'm really excited about this conversation, because privacy wasn't something I had really considered or pondered deeply prior to reading Danielle's book.
Gabriele Josephs: I'm Gabriele Josephs. I'm a 2L, and I had the great fortune of being a fellow with Professor Citron. So I'm looking forward to hearing more of her insights on privacy and how we protect our data and our most intimate information.
Kim Krawiec: Amina, to your point, as Danielle is well aware, I'm nearly a complete ignoramus on anything having to do with Internet and speech regulation, despite her best efforts to teach me something. So pretty much what is in the book and in some prior papers of hers that I've read is the extent of my knowledge. So this will be an interesting discussion for all of us. So the first thing I was going to just ask both of you is why you chose this episode, the topic, the person, the book, whatever it was that drew you to this episode and made you want to volunteer to be hosts of this particular episode. Gabriele, why don't you start?
Gabriele Josephs: Sure. So, as I said, I was a fellow for Professor Citron, and that got me interested in thinking about how exactly we handle privacy matters, especially in a world where tracking is ubiquitous and where we may not think about the inputs that go into certain systems, advertising, for instance. The famous saying is, if you can't tell what the product is, you are the product. I think Professor Citron is doing incredibly useful work thinking about how we protect, or create walls around, what we are willing to share with technology companies and what we aren't. And I think hearing her insights is incredibly profitable for the class and also for the world.
Kim Krawiec: Amina, what about you?
Aamina Mariam: Yeah, so like I mentioned, privacy in general just wasn't something that I thought about deeply prior to this, and it was something I was really curious about, because I knew I wanted more education on it. Something I really appreciated about Danielle's book is that she relayed the information in a very simple way that made a lot of sense and had a lot of impact, which isn't necessarily something we see a lot of the time with law school readings, which are often complicated and intricate. Danielle's book really clearly stated everything that we needed to know and highlighted all the issues that are part of the topic.
Kim Krawiec: Yeah, and I think that is something that has characterized her entire career, and it shows in her writing. She's been very involved not just in academic research, but in addressing these issues with state legislatures and with Congress and in advising companies, and I think it shows that she's been deeply steeped in real-world attempts to control these problems. So we've talked about why you chose this episode and what you liked about it. Let's talk a little bit more about what you hope to learn from her during this conversation, especially since both of you, and especially Gabriele, are already familiar with her and some of her work. So, Amina, why don't we start with you? What else are you hoping to learn from her and want to ask her during this session?
Aamina Mariam: Yeah, I'm really curious to see her responses to a lot of our questions, because I think they aren't just hypotheticals, but they kind of take the issues one step further and we discuss the real world implications of privacy and proposed legislation and all that. And so I'm really interested to dig deeper and see her thoughts on the subject.
Kim Krawiec: Yeah.
Kim Krawiec: And Gabriele, what about you?
Gabriele Josephs: I think on a micro level, a personal level, I want to bring up the concept of privacy in the context of the things that we've learned in this class: what the analogues are between privacy and some of the other topics in the class, where the differences are, and whether or not trading in data is substantively different, in ways that we should appreciate as legal thinkers and legal scholars, from some of the other topics that we've dealt with in the course of this class.
Kim Krawiec: Thank you. I really like that point, Gabriele. It's a nice way of sort of situating this reading within the other readings that we're doing this term.
Kim Krawiec: Great. Let's join the others. There she is.
Danielle Citron: Hey, there.
Kim Krawiec: Hey, how are you?
Danielle Citron: I'm good. How are you? Look at these lucky ducks in your class.
Kim Krawiec: Lucky because they've got no, no. Thanks for doing this.
Danielle Citron: I'm thrilled. I'm so happy to be here.
Kim Krawiec: I'm going to turn it over to these two. They're in charge now.
Danielle Citron: I'm in good hands. Hi.
Aamina Mariam: My name is Amina. I'm a 2L. I'm so excited to talk to you about this. I really enjoyed your book.
Danielle Citron: Thank you so much for reading and for having me. Yeah.
Aamina Mariam: So my first question is about the treatment of celebrities. Society has really begun to reflect on its treatment of celebrities, and that has prompted the question of whether those in the public eye waive their right to privacy. The argument is often framed as an assumption of risk: this person knew that a career in mainstream media came at the price of their privacy, and they chose that career anyway. And once they become a celebrity, they never really get to regain that right to privacy, even if they step away from the career. That argument sounds very similar to the victim-blaming narratives that you mention in the book, such as the principal who was fired after failing to keep her laptop protected from her vengeful ex. I was particularly troubled by the story of former U.S. Representative Katie Hill, and I was disappointed to find out that she lost her legal battle against the journalists who leaked her photos, because the court held that the media has a First Amendment right to publish newsworthy information about elected officials since it's in the public's interest. So I wanted to ask: are we inappropriately treating these cases differently? For example, Katie Hill did make an effort to conceal her photos, but the court is treating her as if she waived her right to privacy just by running for office. Is there any reason that celebrities should be treated any differently than the average person?
Danielle Citron: I think one thing that's important to level set and recognize is that when we think about defamation and reputation-harming lies, we say that celebrities, like public figures and public officials, have assumed the risk of having lies told about them. And therefore, if they are going to bring defamation suits, they have a heavier burden to bear, which is to show that the press knew, or recklessly disregarded the risk, that it was a lie. Right. And what's interesting is that, by contrast, the privacy torts, even in the Restatement (Second) of Torts written by William Prosser, recognize that even celebrities have a right to what I call sexual privacy. It's explicit in the Restatement, the notion that when it comes to things like a sex tape made by a couple just for themselves, they have a right to intimate privacy, and if it's betrayed, they can sue in the privacy torts. And you're right that when Katie Hill sued the Daily Mail and, I think it was, RedState, which is a right-wing website, for invasions of intimate privacy, California's anti-SLAPP law not only justified dismissing the lawsuit on the grounds that it was like a strike suit, that she was suing to stop people from talking about a matter of public interest; the court also awarded attorneys' fees. So Katie not only had to suffer an intimate privacy violation at the hands of these two news outlets, which then made money, of course, from all the traffic, all the data they collected, and the advertising, so they got to monetize it, but then their attorneys' fees were paid too. She's down something like $250,000 in the hole for this lawsuit. And you might ask, okay, Danielle, how do we hold those two things in our head: on the one hand, Prosser tells us in the Restatement of Torts that celebrities have a right to sue in the privacy torts for things like a sex tape because it's not newsworthy, and on the other hand, why can't Katie Hill, Representative Hill, sue when her ex posts nude photos of her and her lover? I think what made her case more complicated, and why I think it's not always true that celebrities have no sexual privacy... well, I don't agree with the outcome at all. You can tell from my book, I think it's the wrong decision, right? I think in weighing newsworthiness against privacy, we strike the wrong balance there. But the justification of the court was that, if you look at the pictures themselves in context, one of the photos featured Representative Hill smoking a bong half naked, and the other featured her with a former person who worked on her campaign. So her lover was someone who had had a polyamorous relationship with her and her husband. Right. Nothing wrong with that. Totally their business. Her ex betrayed her trust and shared the photos with these two websites after she asked for a divorce. Now, the court's justification was that it's newsworthy because voters should know, A, that she's smoking pot (they didn't need to see her naked for that, okay), but, B, that she was having an affair with a staffer, a younger female staffer. Now, we can, at least in context, understand the argument of newsworthiness, right, that voters have a right to know about the character of the people running for office. In the same way, you know, you think of Anthony Weiner and the photos of, you know, crotch shots. Sorry, forgive me, I talk too easily about this stuff.
Well, this is Taboo Trades, so of course you talk about things like this quite easily. Yeah.
Kim Krawiec: Welcome to Taboo Trades.
Danielle Citron: I love it. That's why Kim and I get on so well. You know, when you take a class with either one of us, it's all out there, I would say.
Kim Krawiec: I would say this is nothing, except that we all agreed that in some ways we were most depressed by the stuff in your book so far, which is not a criticism of the book, of course. It's just that it's a depressing topic and you're so cheery.
Danielle Citron: Yeah, we're fighting the good fight. Right. But it's interesting, because think about Anthony Weiner's crotch shots: first of all, I don't want to see them. And I've said in other work, Mary Anne Franks and I have argued, that photos like those, we don't need to see them. We can talk about them; we don't need to see them. But one can understand the argument, right, that as we think about the media and its role in helping us realize the values of the First Amendment, which include self-governance, we're figuring out who we want to vote for, the kind of world we want to live in, the kind of politics that are our thing or not, and the people we want to entrust to bring our concerns forward and represent us as constituents. You can at least understand the reasoning, though I hate the attorneys' fees. Right. I'm going to totally disagree with the decision, but I can at least see why, for someone who's in office. I don't think we needed to see the photos, but I think the press should certainly have been able to write about it. Sleeping with a staffer, smoking a bong, you know what I'm saying? And this was all before, I think, it was legal there. Right. I don't think we should have seen the photos, nor should they have been published. Right. But it is true that Pamela Anderson Lee and Tommy Lee, when they were married, made a sex tape just for themselves. And I guess someone, like an ex-manager, got a hold of it and released it to a company that was selling porn online. And they sued the porn company and the manager, and they were allowed to sue on the grounds that even celebrities' sex lives are their own, and it's not newsworthy, at the least, to see the video of them having sex. Right. And I just want us to remember: there are so few things that we have privacy rights to, including a privacy tort, right? Sex tapes are actually one of them, and that's true even for celebrities. That is, as a legal matter, courts are not going to just immediately say, too bad, so sad, because you're a celebrity. But as we've seen for public officials, that's not always the case, like with Representative Hill. And I think, crucially, it's the practices that matter. Like, forget law for a moment, because as much as I'm a law professor in love with law, as Kim is, we are lawyers, this is our project, we want law to work, right? But what we see is that of the 80,000 deepfake videos that are online today, 90% of them are deepfake sex videos. And I think it's something like 95% to 96% of those are deepfake sex videos in which women's faces are morphed into porn. And so many of them are female celebrities, though not all, and from every part of the world. I think the biggest are India, the United States, and South Korea. So it's female celebrities, their faces from those countries, being morphed into porn. And so the practices of nonconsensual intimate imagery absolutely target celebrities, right? Like in the Fappening, Jennifer Lawrence and others whose photos were hacked and, you know, shared all over the internet. The deepfake sex video phenomenon starts with a subreddit, Mr. Deepfake, that was created to have celebrities morphed into porn, into deepfake sex videos. So in practice, we say celebrities have no privacy. Does that make sense? Like, I'm agreeing with the heart of the question, which is, do celebrities have sexual privacy?
And I think the answer, as a matter of practice, is, for the most part, no. Law would, in certain narrow circumstances, especially if it's a sex video made by celebrities only for themselves, or nude images, recognize that they have a privacy tort, or prosecutors, if they even cared, law enforcement, if they paid attention, could bring a criminal case. But more often than not, there are no torts being brought, right, because there's no deep pocket, and there are no crimes being pursued. Even when it comes to Jennifer Lawrence: you might have thought, what happened to her hacker and to all the posters? And the answer is, no one was prosecuted, even though the attorney general was Kamala Harris, who was deeply interested and invested in the issue she called cyber exploitation. And yet, still. So practice and theory don't often meet. Celebrities might say, in theory, I have a right to sexual privacy that law could protect. But for public officials, it's a harder argument, a much harder argument to make, and they really don't have it, right? Everything seems to be newsworthy. And in some sense, your theory of assumption of risk holds: because you are running for or are in office, you do in some sense assume that risk.
Kim Krawiec: I wanted to ask you, and I think Gabriele is probably going to follow up on this, about the possibility of biased judgments regarding what is newsworthy or in the public interest. In particular, my prediction would be that courts are more likely to rule that female sexual behavior is relevant to fitness for office or employment than male behavior, and that that would be even more true for members of the gay or trans community. Is my instinct right on that, or am I wrong? Is that something that's worth worrying about or not?
Danielle Citron: One thing we can borrow from... we don't have enough of, what do they say you need, a sample set? We don't have enough lawsuits, right, to be able to say we definitively see that bias play out, that women's sexuality is deemed relevant or newsworthy, and therefore you can't sue for a privacy violation, whereas for a man, because of gender norms, it's not newsworthy. Right. But I can tell you, I think, two things. One is in practice. Remember Representative Joe Barton of Texas? Kind of older; he was in his 70s when this happened. He was early to the privacy game. He's why we have financial privacy embedded in the Gramm-Leach-Bliley Act. So he's a big privacy promoter, has long been understood as a big privacy advocate, and was in Congress for 30-plus years. He shared a, sorry, crotch shot with someone he was sleeping with, not his wife. And the photo was shared, never published. Just saying: never published by any outlet. And he resigned, and I think in large part not because of pressure for him to resign, unlike Katie Hill. He'd lived it; he was in his late 70s. He was just like, I'm calling it, a Republican from Texas; maybe he just didn't think it was worth running anymore. He made his bones in Congress, and as a privacy person he had said, on the ground, we need financial privacy because my credit union is selling my data so that I'm getting Victoria's Secret catalogs at my house on Capitol Hill. That was his argument back in 1999. So it makes you totally wonder what his real story was. But he resigned on his own terms. Right. Katie Hill was run out on a rail. If you read her book, She Will Rise, there was just no room for her to make her own decision. So in that respect, I think we would see that bias in the courts. And we've also seen, because Anita Allen has done such important work about how LGBTQ plaintiffs' privacy claims are so often not recognized, some critical cases, some classic examples, of the outing of someone not being recognized as a privacy tort because it's deemed newsworthy. Right. It's truthful information about this man, Oliver Sipple, in a classic case. He defended President Ford and ensured he wasn't shot in an assassination attempt. But the San Francisco Chronicle revealed to the world that he was gay, on the grounds that, how shocking, a gay man was brave, was a hero. His parents didn't know he was gay, and so he brought a suit; his friends in San Francisco knew. And the court said, you can't sue for a privacy violation, because it's newsworthy that a gay man protected a president from harm. So your intuition is matched by some of this. Not the specific question of what happens to politicians in sexual privacy invasions, where we say the woman's is newsworthy and the man's is not, but I think we can make an inference from cases involving LGBTQ ordinary people caught up in events, whose sexuality is more often deemed to be newsworthy, as well as, you know, just the difference in the treatment of Katie Hill versus Joe Barton.
Gabriele Josephs: Thank you so much, Danielle. Nice to see you.
Danielle Citron: Thank you so much. Nice to see you, too.
Gabriele Josephs: So I want to continue on the track of disparate treatment of different groups in society. I was struck by your examples of how differently we mete out shame for the sexual conduct of men versus women. The example of Katie Hill stands out, and the example of the Paris mayoral candidate's masturbation tape stands out as well. But I am saddened by the prospect that maybe the problem is not institutional but individual, that the Internet merely amplifies what was already human nature, and that changing how we use the Internet would only serve to hide that painful sore rather than cut it out. Do we have to rely on the private sector to design some sort of interface that's respectful of privacy? And would that even matter?
Danielle Citron: Okay, so let me just... you used the term human nature, right? That might make it seem sort of natural, what we do, having nothing to do with external social norms. But our demeaning, our humiliation, the shaming, our treatment of women and minorities differently from the heterosexual white male: that's all gender and race norms at work. They are structures. It may be operationalized individually, but Sara Ahmed has this brilliant book called Living a Feminist Life, and she explains that discrimination is structural; your face hits a wall, right? We embed those racial and gender norms in ourselves. And, as you said, it is individuals operationalizing that structure. Right? When I first wrote about online abuse and cyberstalking in an article called Cyber Civil Rights in 2007, Concurring Opinions, my then blog (this shows how old I am), held, what do they call it, a symposium about that article. And the interesting thing was a lot of men responding to the article, which was about the cyber mobs stalking women and people of color online and how sexualized it was, how racialized it was. Right. Sexually threatening and sexually demeaning. Do you know what I'm saying? It wasn't just a threat; it was a rape threat, or anal rape, that kind of thing. It wasn't just a lie; it was, you're a prostitute, or you have a sexual disease. And one response that was really interesting was from James Grimmelmann, which was: the reason why people are making sexually demeaning or sexually threatening comments is because that is what makes women who work online (this is 2007) vulnerable. What makes them vulnerable in a society in which rape is pretty normalized, much more so for women; men get raped too, but they are raped by other men for the most part, right? What makes it normalized is us, as you said. We know we make you vulnerable, woman, if we shame you based on your sexuality, if we post your nude photo. We know that you're vulnerable and that society will count you out as a result. That's how we hurt you. You hit people where it hurts if you want to hurt them, right? And so the idea that these individual perpetrators are hitting people where it hurts almost makes it seem like a pathology of people just not liking each other, as you're saying. But really, what we're doing is embedding a structural problem through norms. And that's not to say women don't imbibe those norms either. They do. You know what I'm saying? We will have female perpetrators vis-à-vis nonconsensual pornography. And when law enforcement actually acts in these cases (I promise I'm going to get to your question, an amazing question about design, in a second), guess who they prosecute? Women and minorities, for nonconsensual intimate imagery. Like the famous Illinois case that goes all the way up to the state's highest court, where our law is upheld as constitutional: who is the defendant? Bethany Austin, who targeted her ex's girlfriend with nonconsensual intimate imagery. So hardly anyone is prosecuted, but who's prosecuted, if ever? People of color and women. Do you know what I'm saying? And guess what? They're not the perpetrators, for the most part. We have study after study showing that the perpetrators are mostly male, and more often white men. Do you know what I'm saying? So what's happening belies reality.
But you asked a great question, which is: if we know that so much of this is in us, that the destructiveness is in us, whether it's... I don't want to call it human nature, because I resist the idea that it's our nature. It's socially constructed baloney that we've got to rid ourselves of. But you're right that it is individuals; the Internet can bring out the best and the worst in us. And should we be focused on questions like the design of technologies that anticipates the destruction and then heads it off at the pass? I think that's exactly right. That is, technologists need to sit side by side with the privacy policy folks at companies before they build these tools, and think of ways in which we can design tools not to prevent speech and speech activity, but maybe we slow you down before you can do something; maybe when it comes to an image, you need authentication. And I think design, technical design, is precisely right. And I guess one of the reasons why I focus on Section 230 in my work is because right now content platforms and online services have a free pass, and they don't have to internalize any of the harm that they externalize, nor do they have a reason to design their tools in a different way. And so my reform proposals are focused on intimate privacy violations and cyberstalking and ways that we incentivize companies to take reasonable steps, which involve the design of technologies. So, Gabriele, I hope that answers your wonderful question.
Gabriele Josephs: It does. Thank you so much.
Aamina Mariam: Well, I think up next, Jenna has a question about spyware.
Danielle Citron: Oh, gosh, spyware. Yes.
Jenna: Hello. Thank you for coming and speaking to us. I think you've queued up this question super well with your response to the last one. So, you write a little bit about the use of spyware by domestic abusers against their partners. I knew that law enforcement had that kind of technology, but I had no idea that there was commercially available spyware that anyone could use. It feels like the companies offering these kinds of tools are aiding and abetting domestic violence. Are there any legitimate uses for this software, or is there any way to punish companies like this for the way they enable domestic violence?
Danielle Citron: I love the question. It's a great one, because I was just on the phone with some folks at the FTC about theories of means and instrumentalities of unfair practices, and whether we can use that theory under Section 5 for bad actors, let's just say. And of course, spyware is a perfect example of those kinds of means and instrumentalities of privacy violations. Period, the end. So one thing that is interesting to note is that we have a federal wiretap law, 18 U.S.C. § 2512, which, and this will kind of knock you over, because you don't often think of laws criminalizing enterprises, says that if you manufacture, advertise, or sell a product that is primarily designed for nonconsensual wiretaps, for nonconsensually capturing audio and digital conversations, then you can face criminal sanctions. And that law has sat on the books, enforced once, in the StealthGenie case. And your wonderful question gets at why: it's very hard to show that services are primarily designed for mischief, because they always say, oh, parents are using the spyware, employers are using the spyware, we have a legitimate use. Right, Jenna, that was your question. So when I first started writing about spyware, I wrote a piece called Spying Inc. in the Washington and Lee Law Review in 2015, so I was studying spyware in 2014. And as I was studying the practice and the ads: if you search for cell phone spy, you will find pages and pages of ads for companies like mSpy and others that hawk this spyware, this stalkerware. And at the time, their advertising was right on the surface. It was: is your bitch girlfriend cheating? If she's cheating, download our app. Do you know what I'm saying? Literally, there was a picture of a man grabbing a woman's arm. There were pictures of abuse used to advertise the spyware. They weren't holding anything back. What happens two years later? There's that one case, StealthGenie, where the DOJ prosecutes a means and instrumentality: your whole business is primarily nonconsensual wiretapping, right? All of a sudden, all the ads changed. Thank God I printed everything for my first article in Washington and Lee; I had printouts from that law review article. So when I wrote The Fight for Privacy, I was like, they changed their ads 100%. Now it was: parents and employers, you can watch your kids. The bottom line is, no one should be doing it secretly anyway. If you're going to put spyware on your kid's phone, you should tell them you're doing it. It should never run without detection. That's the whole point of spyware, right? You can't see it; it won't appear among your apps. It is cloaked by design. So that's why I always find the argument nonsense, the argument that we're not primarily designed for nonconsensual wiretapping, that we're used by kids and parents and employers and employees. Call bullshit. Sorry. You know what I'm saying? You just changed your ad campaign. Right? And we do know from Kaspersky, the security company, as well as the National Network to End Domestic Violence, from studies and from feedback from advocates, that in something like 60% or 70% of domestic abuse cases, when the victim calls into an NNEDV hotline, she will say: I know there's spyware on my phone, because my ex shows up wherever I am.
He quotes conversations I've been having and the searches that I'm doing. How else could he know? There is a strong reason to suspect that there's spyware on the phone. So, unfortunately, we have, I think, an ineffective federal law. That is, we have a law; I think we need to fix it. So in my shorter work, I've argued for and proposed ways to change that law. That "primarily designed," "primarily useful for" language is far too strong a standard. I think we need to assess and adjust that statute, because when you talk to prosecutors, they explain why they never use it, and they say it's because it's too hard to show that a product is primarily used for this, especially when their ads, as you said, Jenna, really well, their ads now say, oh, we're a totally legitimate business. I know we seemed illegitimate in 2014, right? It was so on the nose. Right.
Kim Krawiec: Just to clarify, is the legal standard the intent in creating the product, or is the question what the primary use is?
Danielle Citron: It's primary use. There is a knowledge standard too, but really, prosecutors will explain that where they get hung up is that these companies can point to legitimate uses, which gets to Jenna's question. They push back in any investigation by saying, we're not primarily used for that. I mean, people are bad; we can't control them. That's their response: people are bad. We're not primarily used for that, we're not meant to be primarily used for that, and let me show you statistics, parents are using this device. When we design the product, when we advertise the product, it's meant to be used for legitimate reasons. And my pushback is: it's cloaked. It's deceptive by design. You can't see that it's on your phone. If you're doing it for employees and kids, they should know too. Right. So it's a great question about knowledge and practices, about what's the mens rea and what's the act. It's both knowledge and primary use: you have to show that they know it's primarily used for this and that it is primarily used for this. And I think prosecutors just throw up their hands and don't do anything about it. There are more than 200 of these companies, and unfortunately, basically, if someone has physical access to your phone and your password, they just need a minute to download the app, and then you don't know it's running on your phone.
Gabriele Josephs: Thank you so much for that, Danielle. You've mentioned a lot about technology's evolution and how it's creating new, difficult, and thorny problems, so I want to spend some time on that. In this section, we'll be focusing on artificial intelligence and deepfakes. You mentioned some of the legal differences that are possible, and Dennis is going to try to home in on what those legal differences are.
Danielle Citron: Great.
Gabriele Josephs: Yeah.
Kim Krawiec: Thank you so much for doing this.
Danielle Citron: It's fun for my students to turn the tables on me now, right? I'm enjoying it; I hope they are too.
Kim Krawiec: I am, too. Glad we're in the same boat.
Danielle Citron: Yes.
Dennis Ting: My question was... I've been reading a lot more about, and you mention in your book as well, this rise of deepfake pornography, AI-generated pornography, where you can take someone's face, put it into a program, and it creates a realistic image. If we look back 10 or 15 years ago, the big issue was things getting leaked through the cloud, or you take your laptop to a store and someone pulls all the files and spreads them. So now we have this new generation of non-consensual pornography. And my question is, is there a legal difference between these two, between someone who has their personal photos leaked through a hack of the cloud versus someone who has their image used to create these photos? Both are clearly not published or publicized with the consent of the subject, but there are some differences in how they're made. So how should the law treat these two types of invasions of privacy?
Danielle Citron: So I'm glad that you accept the premise, or it seems like you're accepting my premise, that they're both invasions of intimate privacy. On the one hand, the privacy torts include the false light tort, which anticipates the notion that your identity would be cast in a false light and distorted, that a distorted image of yourself would be thrust upon you, and that's a privacy invasion. But so often when I talk about deepfakes, the response is, well, you're talking about defamation; this is distortion and an out-and-out lie. It's you saying and doing something that you never did and said, and that's defamatory. My response is, it is defamation, but it is also an invasion of intimate privacy. That is, you're exploiting someone's body. You are turning it into an object, and you're exploiting it and exposing it. You're violating intimate privacy. And, you know, talk to victim after victim. When I interviewed Rana Ayyub, who's a journalist in India, she was in the United States; we sat together, and we looked at the deepfake sex video clip of her that was literally on nearly every phone in India. It looks like her. It's Rana. And she said to me that when she first saw it, it felt like a punch in the face. She vomited. She was like, there are now, I know, millions of eyes on my body. What we kind of talked through is that it's not her body, and she knows it's not her body, but she experiences seeing that video as her body. And crucially, as she said to me, no one knows it's not mine, that I'm not engaged in that sex act. Right. And, Gabriele, you framed Dennis's question under that notion of technological change: here we are amidst wild and rapid technological change, and look at what it can do in the violation of privacy. And it's true. Dennis is focusing on deepfakes. As is always true for technologies in the digital space, and really most technologies, but especially digitally, you can do things so much more cheaply and easily and at a different scale, because storage is so inexpensive and we're so networked, that the harm is so outsized vis-à-vis harms of the past. In the past, you could create a fake video of me having sex with someone, because we had tools of fakery; Hany Farid has written a whole lot about fake photos and other kinds of distorted media. And you could take a DVD, and this is an early intimate privacy case, a fake DVD of me having sex with someone, and you could put it on people's cars, right?
Aamina Mariam: Okay.
Danielle Citron: Fifteen people get the video. That's like an early case involving intimate privacy, right? Now, you're not making DVDs, which take time and money and energy to make and to put on everyone's cars, and with which you can only reach so many people. Now, the real or fake video of me goes viral. And if there is attention in an attention economy, if people are paying attention to it, like in Rana's case, where I think she found from the reporting that it was on half the phones in India, that's millions and millions and millions of people. And she was really well known; she was a credible investigative journalist exposing human rights abuses in India. So it changes the calculus. And law in the United States: what's interesting about deepfakes is that they're almost easier to regulate, though we have succeeded with intimate privacy violations, right? Because a deepfake is defamatory, when I talk to lawmakers we frame it as a digital forgery. So I've been working with folks on the Hill, House and Senate, to criminalize digital forgeries that are harmful and defamatory. And you might say, okay, that's not that complicated, meaning we absolutely have civil penalties for defamation. Right. And, this goes back to Amina's initial question about how we deal with public figures and public officials, we're talking about defamation, which for public figures must be shown to be done with actual malice; if you're an ordinary person, you just have to show negligence for defamation. Right. So law does treat both defamation and intimate privacy violations; they're both not easy to regulate, right? But at least with defamation we have a clear playbook. We are still making the playbook when it comes to intimate privacy violations, because of the laws that we've drafted: we went from two, three criminal laws in the United States in 2014 to 48 states, D.C., Guam, and Puerto Rico that now criminalize the practice of nonconsensual intimate imagery. And of the laws that we helped craft, five have gone up to their states' highest courts, and in all five cases the laws were deemed constitutional. They got through the crucible of strict scrutiny and came out the other side, because we crafted them narrowly enough, even though the ACLU screamed bloody murder that they were unconstitutional. So we can grapple with both of these problems. But you're right that deep fakery involves defamation. And for a lot of First Amendment scholars, you talk to them and they say, you know, it's not that hard for me, because what you're doing with a deepfake is showing that the origin is false; you're not debating the content. You're just saying, look, I didn't do and say what you say I did and said. Right. So Helen Norton, who's a free speech professor at Colorado, has always advised me: why are you freaking out about regulating deepfakes? The source of the complication isn't a debate about whether what I did and said is true; it's whether it is me doing or saying something at a particular time. She thinks that's less difficult from a free speech perspective. You know, right now, besides defamation law, in the United States we don't have many tools to deal with deepfakes. There are five states that criminalize the practice, and in particular mostly deepfake sex videos and deepfake sex content.
New York apparently just passed a bill, and folks at the White House Gender Policy Council want to know if we think it's okay. I don't know; I haven't looked at it yet.
Aamina Mariam: Great. Well, I think the next question is going to come from Julia about identification and facial recognition technology.
Julia D'Rozario: Thank you, Amina, and thank you so much, Danielle, for being here. I have a question about the possible intersection between digital voyeurism and facial recognition technology. I've personally been filmed in public without my consent, and it's happened to a lot of my friends too, without their consent and without their knowledge. I was unaware of what was happening until people nearby noticed and approached me and advised me to move. And this is scary enough without the filmer having access to my name or any other identifying information. So the cases described in the book, like the hotel employee who was able to access names, emails, and other data, are totally terrifying. It occurs to me that facial recognition is so powerful that, given the right tools, an anonymous privacy invasion, like a stranger filming you on the train, could quickly lead to more dangerous outcomes for the victim. And I was wondering if you could envision a world where data from facial recognition is sold and becomes widely available, such that privacy invaders can use reverse image technology to discover the identities of their so-called anonymous victims.
Danielle Citron: I think we already live in that world. Reverse image search exists, and there are facial recognition tools available on the market. Clearview AI has now settled with plaintiffs and agrees its product will only be sold where there's express consent; it's, like, the biggest facial recognition company, and its primary client is, of course, law enforcement all across the country. But Clearview AI isn't the only purveyor, and so your question is already a reality, sadly to say. Julia, forgive me, it's true. There are facial recognition tools available on the market that you can use to re-identify people. Like PimEyes: that's a website, and if you go there, you can do that. So I hate to say it, but we're already in that world; facial recognition tools are widespread. And sadly, the response to someone walking on the street is that you've given up your privacy. We have this false dichotomy between being public and being private. We're so rigid about how we view public and private, as if there is a clear line, and golly, it's not clear. Because there are so many times when we're in public spaces where we expect, we want, and we deserve privacy, but we don't get it. And that's true of upskirt photos. You know what I'm saying? There are ways in which, Julia, you're walking down the street and you didn't want someone taping you, and then, who knows, distorting it, or just taking it out of context. We often presume that because you're outside or in a public place, a park, a concert, a store, the norms are that it's considered public, even in areas where we'd say, okay, not up my skirt, but if it's just your face. As a society, we like easy choices, unfortunately, and we're not nuanced, so we say: you're walking down the street, there's no privacy. In class, I don't know if Dennis will tell you this, but I have these depressing moments when we're doing the doctrine. Kate, I'm not sure if we did that in our class, you know what I mean? Where I have to explain that, unfortunately, law isn't nuanced; it's the bluntest darn tool ever, and especially when it comes to privacy, we have this automatic assumption, as if it's so easy to tell public from private, and that eats up a lot of our privacy.
Kate Granruth: Hi, Danielle. Feels weird calling you that, since, as I said, I was in your class. I have a question on the uses of deepfake technology. As you mentioned, deepfakes that place a specific person into a sexual context they weren't in are clearly violative of privacy, and I find any use of deepfakes to be unnerving. But I'm curious whether you think they are inherently violative, or whether there could be any pros to their usage. I could see deepfakes making industries like the pornography industry much safer by not requiring actual actors on set. And I had a sort of strange law school debate with a friend about whether deepfakes could protect real-world children from being victimized in child pornography production, in the sense that, if it's going to be produced no matter what, we might as well make it wholesale fake. Do you think this is realistic at all, given the current view of the technology?
Danielle Citron: Okay, so on the one hand, there are definitely pro-social uses of the technology, that is, of synthetic video and audio. So you think about Leia in the Star Wars movies: Carrie Fisher had passed away, and because she had permitted her image to be used, she was, I think, in the last movie. So it wasn't her; it was synthetic imagery of her talking at the end of that movie. So there are pro-social uses. There are folks who have ALS whose voices can be recreated and then heard; we're using synthetic audio trained on prior recordings of the person's voice. We're doing the same with interesting historical projects. MIT did a really cool project imagining what Nixon would have said if the moon landing had gone wrong, and I was in that movie about deepfakes, about having created a deepfake and what it would have been like if history were different. You can use deep fakery for those interesting questions. But that's really not deep fakery in the troubling sense, because you're doing it while saying it's fake. The problem with unauthenticated synthetic video is that it's fake but it's not supposed to be detectable as fake. It's passing itself off as real; it's inherently deceptive. The business is deceptive. Unless you're using these tools, video and audio, in ways that make clear it's fake, that it's synthetic, then it is deceptive as a business. So I think there are uses of synthetic video and audio that are pro-social, absolutely. The problem is when you're using these tools to deceive and to show people doing and saying things that they never did, whether it's a congressperson, a businessperson, a sex worker. Now, you asked: what if we could use fakes to eradicate the victimization of real children? The reason the Supreme Court says we can regulate child sexual exploitation material is because it involves the sexual rape of children. And the Court has said synthetic videos of children are protected speech. And on the one hand, we're never going to get rid of child sex abuse. I hate to say it, but I just don't think we're going to totally eradicate it, even if we permitted child sex abuse material that's fake. Does that make sense? We already have fake child sex abuse material. You know what I'm saying? The thrill, for these very ill people, is that it's real. For my next book project, which is going to be with Hany Farid, we're going to write about kids' safety and privacy. He's, like, the father of PhotoDNA. He's why we can detect CSAM: he and Microsoft created the hashing technology that's used around the globe to prevent the reposting of child sexual abuse material. And as Hany explains, the problem we're having is that it's not used often enough; we're doing a poor job on execution in dealing with these hashes. So often companies are not doing a good enough job. It's not that we can't figure out the tools; it's that there is production of more material that isn't caught in a hash, that isn't prevented and filtered and blocked.
Kim Krawiec: Are you saying that if we were open about the fakeness, so that it's not deceptive, it's not as appealing to consumers? They want the real thing. They want real children.
Danielle Citron: I'm spitballing there, Kim. Do you know what I think? I guess what I was saying is that Kate's question was, if we legalize, and I think it's already legal, fake child sexual exploitation material, that's safer than having real CSAM, right? Maybe that's the way to go. And my response was: we already have fake CSAM, and I don't think it's going to dry up the market. That's not to say that I won't try everything in my wheelhouse; I'm a big fan of throwing every tomato against the wall. But I think law is our teacher, and I think saying that fake CSAM is okay is really problematic. Does that make sense? Because the whole enterprise is the exploitation of children's bodies, and I think it only feeds the beast. It does not make things better. So for law's expressive value, to me, I would hate sending the message that even fake is okay. Does that make sense? Just in the way that I think it's an intimate privacy violation of the highest order to create a deepfake sex video of a grown-up, of Rana Ayyub, let's say, or Noelle Martin, or any other victim of deepfake sex videos. So for law's educative value, and the message it sends, and the morals it conveys (law is part of the fabric of who we are, right?), I don't want us to say that.
Kim Krawiec: But you would be willing to say it if it actually worked to save real children?
Danielle Citron: I think so. I don't think I can make... yeah. This is still your class, right?
Kim Krawiec: We had a long discussion about this already, right?
Danielle Citron: Yes. And I'm probably going to say a stupid thing, but I want to dry up that market, for sure. So if you could tell me it would, does that make sense? When you talk to survivors of child sexual abuse, you learn that you're never free of it. Anyone who has suffered sexual abuse is never free of it. And we're talking about grown women, young women I've talked to, whose lives are never the same. Having read so many stories, and Hany and I are going to write about them, they're never the same, ever. Yeah. So I guess I want to prevent that harm. It's always with me. So I'm not dispassionate. I feel like, Kim, I can't give you an answer that feels cool-headed and wise. It's not cool-headed. That answer is going to come from the heart, because I want to solve the problem of child sex abuse, and it's a hard scourge to solve. You know what I mean?
Kim Krawiec: Yes, I do.
Gabriele Josephs: And now we're going to switch gears to putatively consensual disclosures and question how meaningful it is that we consent to disclosing our data. I want to start with a question about benefits from Amina.
Danielle Citron: Yeah.
Aamina Mariam: So, as you mentioned, there's some information that we share that is beneficial, like sharing our health data to monitor blood glucose or things like that. My question is about where the threshold is, where we decide that giving some of our data up is okay on a national scale. For example, ancestry DNA tests have recently helped investigators solve a lot of crimes, and I personally thought that having a national database of everyone's fingerprints or DNA could be helpful. But then we don't know what will become of that information we share. There is going to be a database of our DNA and everything associated with it, including our health, and that could be shared with insurance companies. So I just wanted to ask where that line is, where societal benefit outweighs the risk of a loss of privacy.
Danielle Citron: Great question. So I think our first big mistake when we're talking about consent, and it's not your mistake, it's ours, meaning our companies' under the U.S. consumer protection model, is to think of consent to one thing as consent to everything. Like, I consent to have sex with my husband. I love my husband. I'm not consenting to have sex with every person who's walking down the street. Right. Doesn't that sound crazy as I say it out loud? (Sorry, he's not here; he's not objecting to me saying anything kooky.) To take consent to one thing as consent to everything is to misunderstand the project of consent. Consent counts if it's knowing, if it's meaningful, if it's fully understood. I consent to giving my data to Apple; I'm allowing Apple to have that data on the grounds that they're using it for the product they're giving me, which is my phone. I am not consenting, when we think about our average expectations and attitudes, to it being sold to a data broker. At least that's my understanding of most of us, societally: we want, we expect, and we deserve privacy in our communications and our phones, and we don't think of all the services we deal with as hawking our data. So the first thing is that it's not meaningful consent, because it's not on the basis of what we understand. We don't really understand that the deal is, I give you my data and you can hawk it to everybody, including the government. So the consent is ill-informed, that's the first thing. It is misunderstood as a consent to everything, whereas we know consent in our everyday interactions is contextual: I consent to one thing, but I don't consent to another. And so, you know, Mary Anne Franks and I wrote an article about criminalizing revenge porn, and I think the example that we gave was, I give a waiter my credit card, right? I say, hey, waiter, pay my bill with my credit card. I do not give permission to the waiter to use my credit card elsewhere. That's not what the deal was. And so it is true, in our consumer protection approach to data in the United States, that we presume that once data is given over to a party, unless they lie in their privacy policy, there is consent. We don't actually get consent. It's a myth, right? It's a myth, totally. The consent model in the United States is broken. But because it's frictionless, we presume consent. We presume you can sell and hawk and use, right? There's so little friction. And who wins, right? Who loses, ultimately? Our consumers, our individuals.
Kim Krawiec: I have a technical question about that. Are we literally presuming consent, or are people actually consenting, but in a way that isn't meaningful enough for them to know what they're consenting to in the first place?
Danielle Citron: That is, we presume that people read privacy policies that they never, ever look at nor have to click through, and because the company has a privacy policy, we presume that people know and are accepting the terms.
Kim Krawiec: But the reason I'm asking is that presumed consent typically has a different meaning. I'm not defending it as meaningful; I just want to clarify, because in my world, presumed consent just refers to one of two possible default rules. One would be that the default is we presume you consented to something unless you affirmatively opt out.
Danielle Citron: Unless you say no.
Kim Krawiec: Unless you say no. And the other one, right, is that we presume that you've said no unless you affirmatively opt in.
Danielle Citron: Right? When I we live in the same like, I'm with you because we have the opt in and opt out approach, right? That's like, one's European, and the other is the US. And the is you have to opt out. And you know what? In the US. You don't even get to opt. Like, even if I said to Google, don't collect my data, you know what they say to me? Don't use my like, there's no using a service and opting out unless I live in California or Colorado. Do you know what saying? Like, under federal law, there is even no way to opt out, okay? You know what saying? Like, that's how profligate we are with our data practices and how weak and vulnerable we are.
Kim Krawiec: So the rule, then, is that you're presumed to have consented to the use of your data regardless of whether there's a privacy policy or not? Or is it...
Danielle Citron: Most apps, yeah. Like, for example, the reason why we have privacy policies at all is due to California. Okay? California has an online privacy law; God bless California, in 2003 it passed a law requiring privacy policies. And because it's cheaper and easier to have one approach, there's the California effect; as AG Harris would say, as goes California, so goes the nation. You have a privacy policy because California says so. It's the biggest market, right? So most of the behemoths, the tech giants, have privacy policies. But mobile apps, until Kamala Harris, when she was the AG, started bringing enforcement actions against the mobile app ecosphere, something like 90 percent of mobile apps had no privacy policies, and there was nothing except California law that required it.
Aamina Mariam: Okay, so I think we're going to move to a new question about legal implications.
Danielle Citron: You've got it.
Julia D'Rozario: Hi, Danielle.
Anu Goel: Thank you so much for being here. I was really interested in the portion of your book where you discuss the hyper-surveillance of women and girls. In particular, I think about apps like Flo, which I used to use, and I understand how much information they store, but obviously they fail to protect that...
Danielle Citron: Information. And they still fail to protect it, shockingly, post-Dobbs, right?
Anu Goel: And in the wake of that, I had deleted my Flo account, because I had gotten, just through social media, a little bit of an understanding of what you go into in that section of your book. And given the fact that this app could possibly be used against women should abortion be criminalized, I just didn't really feel like the benefit of being able to track my cycle outweighed the hypothetical risk of my data being used against me. But I was still a little confused about how an app could possibly know you had an abortion if you don't specifically enter that information into the app.
Danielle Citron: Right.
Anu Goel: And regardless of my confusion, I had also learned from social media that many women across the country were deleting these apps. So I still felt it was important to delete mine in solidarity, because I believed and hoped that if enough women deleted their accounts, it would send a message to the companies that make these apps and encourage them to change how they treat and protect our data. And now I'm seeing abortion laws being amended across various states; it's even on the ballot here in Virginia in November. So I wanted to bridge that gap in my understanding: what kind of data might we see used against women in the near future, and what might that look like? When I think about a woman who goes to another state to access an abortion, is there a way that information could be used to prosecute her if she lives in a state where it's criminalized?
Danielle Citron: Great. So you're right that on Flo, you're not going to say you had an abortion, though some apps have asked for your reproductive history, right? What it is is circumstantial evidence. You get your period, you don't get your period, you fill in when you get your period. You go to an abortion clinic, you go to a pharmacy and get the morning-after pill. It's not that one piece of evidence says you got an abortion. The same is true for someone who murders someone; they're never going to say, I did it, I hated them. It's never that easy. Any criminal case is an amalgam of circumstantial evidence: you missed your period and didn't get it, and then two or three months later got it again. And then there's other circumstantial evidence that shows a purchase. It's often searches, searches on Google: how do I get the abortion drug, will mifepristone work after X many weeks, right? That information, when purchased from a data broker or subpoenaed from a provider like Flo, can often be used to then get a search warrant. And the search warrant gets your Facebook messages. I'm just taking particular cases, and it's those Facebook messages between mother and daughter discussing the timing of the abortion pill. So it's part of a bigger picture, little dots of information that help you first get the warrant and then, from there, get your communications, right? It's usually an amalgam of things. And it's not just the period tracking app; it's in combination with, say, a tip from a domestic abuser. Do you know what I'm saying? I have a piece coming out in the Florida Law Review about intimate privacy in a post-Dobbs world. The information is going to come from all these different places, including individuals. And we know we have groups that are urging family and friends to report women, right? So we're creating incentives and social norms in some states, whether it's Oklahoma or elsewhere, do you know what I'm saying? The media ecosystem is different depending on where you look, on the media you ingest and who your Facebook friends are. So I don't think it's one piece of evidence, like your period tracking app. When I was first asked, after the demise of Roe, what advice I'd give women and girls, what I was telling my own two daughters, I was saying: get rid of your apps. Not because I thought it was going to be the slam-dunk evidence, but because I think before we use these apps, they should be protecting our data. Do you know what I'm saying? I want us to be able to use them. They do make our lives better and easier, and I know they must help you; we always forget, right? Young people forget when they're getting their periods. It's so normal. I wish I had an app growing up; I was always discombobulated. But I don't want these companies betraying you, which is what they're doing. They're selling your data to advertisers and marketers. They're selling it to Facebook, which is an advertising company, and it's then selling the data onward to other advertisers and marketers and on to data brokers. And I can't tell you, if you have terminated a pregnancy and you live in a state in which it's illegal after a certain point and you go out of state, that your home state is not going to go to any and all providers to get information about you, including location data brokers that show you went to another state and another provider. Right. So I guess that's why I say, sadly, don't use them. Mozilla did a study about six months after Dobbs, when period tracking app companies were saying, we're protecting your privacy, and Mozilla found that they were not. Something like 80-plus percent of the 25 biggest apps were not protecting the data. They were selling it.
Kim Krawiec: Were they lying, Danielle, or did they just not even know their own policies?
Danielle Citron: That could be it.
Gabriele Josephs: All right, so next we will have Jenna ask a question about privacy in school. We understand you have an upcoming article on exactly that topic, and she will chime in on that.
Danielle Citron: Yeah.
Jenna: So students nowadays typically take home school-affiliated devices. Like, I have two younger siblings who are in high school and have school-issued laptops that are the main devices they use. I know that bringing home these kinds of devices presents a lot of privacy concerns for children, so I'm curious if you could share your thoughts on this, considering your upcoming article.
Danielle Citron: The surveillance state is now in our homes. Students are under total surveillance in ways that most people completely misapprehend, and it's not even worth it given the claims of safety, right? Claims that it's going to detect kids at risk of suicide. So just to back up for a second, sorry, Jenna, for folks who haven't read the article: Chromebooks are being funded by foundations and given to public school students, and many public school students don't have the funds to buy another device for home, so the device they get from school is their device, right? And it is being monitored 24/7, 365 days a year, by third-party platforms like Bark and Gaggle and GoGuardian and Securly. They're all startups; some are more mature than others, and many have private venture funding. Their idea is to make money, and these are lucrative public school contracts. It's something like $6 a kid, but in school districts with a lot of children, that's a lot of money. They're consistently, indiscriminately, and seamlessly monitoring everything someone does with their laptop. So you chat, you text, you search, you browse. The surveillance service run by the third party is both blocking and filtering content that it thinks is unsafe, and sometimes that's really overbroad. So it's blocking kids from seeing reproductive health information, mainstream press outlets, as well as LGBTQ-related material; it's all being filtered out. That's on the one hand. On the other, all that material is being continuously and seamlessly analyzed. They're detecting what they say, or believe, is evidence of self-harm, of violence, of other kinds of concerns. And what happens is that there are human content moderators who work for these companies, and they are pinged every five seconds. Once they're pinged, it's really hard to assess on the ground what's going on. So then they'll send it to a school administrator during the day, or at night to the school resource officer or law enforcement. And more often than not, the pitch is security, right? It makes school administrators feel safer. But the evidence is such that, really and truly, what does it do? It catches minor infractions like cursing, it leads to the punishment of children and the outing of kids, and it makes them less safe. It also leads to punishment over nonsense, right? And who gets punished more often? There are studies that suggest it's more often minority students. So it's not making us safer. It's a massive privacy violation that I don't think is worth it. It chills students from doing what they want on their laptops, you know what I'm saying? And there are all these interesting studies, from CDT this summer, and from EFF and the ACLU, about the impact of the surveillance and how it makes kids feel like criminals. Because at schools, the cameras aren't just on the outside, they're on the inside. So what kids say is they feel like the criminals are us. You know what I mean? Like we're the problem, not outside invaders. Think about cameras: where do we have cameras at home? If we have them, we have them on the outside, like the Ring camera, right? We're not watching ourselves inside our houses. So why are the cameras in kids' schools focused inside, right?
So this piece is about introducing my conversation about student intimate privacy and why students need it all the more, because they're really trying to figure out who they are, right? That's their job. My job is this job, right? Self-development is not my only job. But when you're a kid, it is your job, right? It's becoming a citizen. And so I have a new paper that is part of a much longer project I'm going to be doing on youth privacy, because I've focused so much on grownups my whole career.
Kim Krawiec: So I just want to make sure that listeners know that the article we were just discussing is forthcoming in the Stanford Law Review, correct? And I'm going to put a link to it in the show notes. It's currently available, is that right?
Danielle Citron: Yeah, it's up on SSRN. I would love comments. I tweeted about it, even though I've kind of given up on X, and I got some really amazing comments, actually, from UVA Law grads who read the article. So, you know, social media. Go UVA, right?
Kim Krawiec: Well, thank you so much for doing this. This was tons of fun.
Danielle Citron: Wonderful. Thank you so much, Amina and Gabriele, for hosting this fabulous conversation. And of course, thank you to Kim for being an exquisite scholar and teacher, for having me, and for being my good friend. I'm so lucky to have that.
Kim Krawiec: My only claim to fame, Danielle, is that I am the friend of DKC.
Danielle Citron: You guys are really lucky. Continue having an amazing semester.