The German Ministry of Justice’s Guidelines for a Proposed “Law against Digital Violence” and Civil Society Response

By Todd Sekuler

Over the course of April, the German Ministry of Justice presented a set of guidelines for a proposed “law against digital violence” designed to enhance and expand opportunities for victims of digital violence “to enforce their rights themselves” (www.bmj.de/SharedDocs/Gesetzgebungsverfahren/DE/Digitale_Gewalt.html). The draft guidelines aim to address “notorious infringers of rights in the digital space” and seek to assist in situations where the identity behind a particular social media profile is unknown. According to the guidelines, persons targeted with violence or other infringements would be able to request that a court block an account, but blocking should only be considered if alternative measures are deemed ineffective and if judges see a “risk of repetition.” The current draft stipulates that a profile should only be blocked for a reasonable period, and that the account holder must be notified of any request to block their account and given an opportunity to respond. It also calls for the mandated preservation of incriminating data. The guidelines propose reducing legal hurdles for officials – and, free of court costs, for targeted persons themselves – seeking the identity of the person behind an otherwise anonymous but illegal post or suspicious account, by mandating that telecommunications companies (including messenger and internet access services) hand over user data, such as IP addresses and the names of the persons to whom those addresses have been assigned. The law would also ensure that users know whom they can contact to have a statement deleted and to document a service provider’s knowledge of it, and that all social networks have appointed a domestic authorised representative in case of legal action directed at them in the country.

In response to a request for public feedback, the concerns of various civil society organisations were compiled by Chris Köver and Sebastian Meineck last week in German on the website netzpolitik.org (netzpolitik.org/2023/digitale-gewalt-acht-klaffende-luecken-im-geplanten-gesetz/), which I have tried to summarise here in English for readers of our blog.

1. More money, staff and training for counselling centres

Civil society representatives insist that counselling centres, not the judiciary or the police, are the first and best place to turn for help with stalking or threats – because they support and centre victims. Many centres in Germany are understaffed and underfunded, with limited resources for training on digital violence, which has become an integral part of violence generally, especially within partnerships. In response, the association Frauenhauskoordinierung and the Berlin Anti-Stalking Project demand increased resources for counselling services, including for regular basic training on digital violence. Funding, moreover, must be sustainable and guaranteed in the long term. The non-profit Superrr also emphasises the need for specialised counselling for different forms of digital violence. Rather than designated IT experts for each counselling centre, the group suggests that centres could share IT staff. Frauenhauskoordinierung calls for nationwide contact points to assist with technical questions and the initial securing of evidence, as is already the case in Baden-Württemberg.

2. Thoroughly define “digital violence”

Digital violence encompasses various forms of assault, but the contributors suggest that a clear definition of the term is crucial for developing effective measures to help victims. They contend that the proposed law is broad and unspecific, extending its support beyond digital violence to cover, for example, bad restaurant reviews or other criticism, which could ultimately negatively impact other rights and freedoms. A narrower definition that includes offences such as stalking or doxing – when attackers publish private addresses in order to harm people – would sharpen the law’s focus. They also point to overlooked examples of digital violence, including the use of mini-cameras, GPS trackers, and blackmail. The law’s focus on hate speech fails to address significant aspects of digital violence primarily directed against women, such as cyberstalking, unwanted contact, and the publication of intimate recordings. Providing standard examples of digital violence, they contend, would enhance the law’s application.

3. Explore the true extent of digital violence

Sensitive to the fact that digital violence is a widespread problem encompassing various forms of harm, the civil society representatives draw attention to the lack of reliable research on the issue, including about perpetrators, noting that hate speech receives the bulk of attention. The Berlin Anti-Stalking Project and Superrr both highlight the need for more comprehensive data to identify vulnerable groups and develop evidence-based policies sensitive to intersectionality. While digital violence is often seen as gender-specific, the federal government acknowledges that other marginalised groups are also disproportionately affected. However, the commentators draw attention to the fact that there are no separate statistics or criminal offences for digital violence. A comparative study on violence in the digital space is planned for 2024, but a targeted study on affected groups and forms of digital violence is still missing.

4. More staff and training for the judiciary and the police

Civil society organisations call for urgent help for police and authorities in dealing with digital violence. They report that officials are overtaxed, lack understanding, and fail to take victims’ problems seriously. The Frauenhauskoordinierung in particular demands more staff and training to deal sensitively with victims, while the police need training in how to secure evidence quickly. They propose that the judiciary also needs more staff to speed up proceedings. That being said, representatives note that the police can themselves become perpetrators of violence, which raises its own needs for support, accountability and justice, and can discourage reporting, especially for people from marginalised communities.

5. Clearly anchor image-based violence in criminal law

“Revenge porn” is a form of digital violence in which intimate recordings are distributed without consent. Current laws only partially cover this, and victims often have to resort to copyright or data protection law. The Criminal Code likewise only partially covers image-based violence, for instance via section 201a, which applies only if a person was photographed in a “room that is particularly protected against viewing.” A newer section (184k StGB) is aimed at cases like “upskirting” and “downblousing”, but lawyer Anja Schmidt calls for better protection under criminal law for adults, as only minors are currently protected. HateAid and the Frauenhauskoordinierung also see the “biggest protection gaps” in image-based violence, and Green Party MP Renate Künast demands a separate offence for it in the legal code.

6. Protect private addresses of data subjects

Victims of online stalking and threats often have nowhere to turn for protection except their own homes, making it especially traumatic when their private addresses are also shared online. The contributors suggest that German law presents two challenges for victims looking to protect their addresses: Firstly, anyone can apply to the official registry of residents and find out the private address of any other person. While it is possible to block one’s address, this requires evidence of a threat – by which point it is likely too late – and must be requested again every two years. Secondly, the obligation to include a local contact address for blogs and websites is a danger to those who run them, especially activists or freelance journalists. Solutions exist, such as allowing people to use law firm or co-working office addresses in the imprint, but these need clarification and more alternatives.

7. Informing those affected of their rights

According to the German government, media literacy and knowledge of one’s rights are crucial in defending against digital violence. However, counselling centres report that many people are unaware of their rights. The Berlin project “Digital Angels,” which offers young women digital self-defence and other forms of support, reports that many of the women it works with don’t know their rights – especially how “analogue rights” apply in the digital realm – or how to report violence and seek counselling. Superrr suggests that the government has a duty to work with states and municipalities to educate the public, including through adult education centres. Victims can also be informed of their rights by counselling centres, although many are unaware that these exist.

8. Prevention: working with perpetrators

While perpetrator-focused approaches to the prevention of violence are gaining attention in certain areas, such as the prevention of sexualised violence against children, they hardly figure in discussions about digital violence. The European Network for the Work with Perpetrators of Domestic Violence, an umbrella network that advocates for taking digital violence seriously as part of intimate partner violence, runs a campaign intended to motivate potential perpetrators to examine their own behaviour and seek help if in doubt. In the same vein, prevention should also be emphasised in the guidelines, including education in schools on topics like surveillance in relationships, the consequences of stalking, the risks of sharing locations, and the non-consensual sharing of intimate images.

To take this topic of prevention further, and with some critical distance: how might we consider this proposed law, and also these responses by civil society, through the lens of state power and surveillance? Increasing resources to regulate and respond to digital violence offers a compelling solution in the short term, but it is difficult to sustain and risks sidestepping the root causes of violence in society. How are these investments in the criminal justice system brought into dialogue with efforts to support and strengthen social and economic programmes addressing the root causes of violence, online and offline? And how do both state and civil society approaches towards the criminalisation of digital violence relate to broader societal attitudes about crime, punishment and victimhood?

“Netzpolitik. A Feminist Introduction”: Discussing with author Francesca Schmidt about queer feminist perspectives on the governance of hate speech in Germany

Interview by Todd Sekuler

Francesca Schmidt is a founding member and board member of Netzforma* e.V. – Verein für feministische Netzpolitik (association for feminist Netzpolitik). She is also Programme Director for intersectional knowledge related to memory and transformation at the Federal Agency for Civic Education in Germany, and a former Senior Programme Officer at the Gunda Werner Institute for Feminism and Gender Democracy. Several months ago, CrimScapes team member Todd Sekuler, who researches the criminalisation of online hate speech, spoke with her about her 2020 book “Netzpolitik. Eine feministische Einführung” (published by Barbara Budrich). Here is an edited transcript of their conversation (translated from German into English):

Thanks so much for talking with me today about your ground-breaking book, Netzpolitik. A Feminist Introduction, which I see as resonating closely with the approach of our research project – it investigates the implications and impacts of policies beyond their initial intentions. To begin, could you please explain to readers of our blog what Netzpolitik means?

Netzpolitik is not yet a properly defined term, but several issues have been used to define what Netzpolitik should include, such as: digital publicity, access to the Internet as a structure, access to content on the Internet, and copyright and data protection. Of course, these all overlap to some extent, they’re not as clear-cut as they seem. But they are what is most commonly understood as Netzpolitik – at least from an activist perspective, and also from a political perspective.

In the book, you distinguish between different feminist forms of activism on the Internet, or with the Internet, and you make a distinction between feminist Netzpolitik, cyber feminism, and Netzfeminismus. Can you briefly explain these distinctions?

Yes, so cyber feminism, when considered as part of a historical investigation of digitization and technology, can be viewed as an artistic engagement with everything that took place digitally and technologically in the ’80s, ’90s, and early 2000s – and it was really not all that regulative. By that I mean it was not much about questions like: How do we want to shape the Internet politically, in the sense of regulative politics? So it was not politics in the sense of a feminist activist perspective, but really, let’s call it, “a job description” (Berufsbild) – an artistic engagement. Netzfeminismus would be what connects to that and makes it understood through an activist lens, and grasps the space of the Internet as a space in which one can become active as an activist and negotiate social processes. It was similar in cyber feminism, but I think that was more of an artistic engagement. I distinguish feminist Netzpolitik from those two because it’s really about the processes – regulative political processes. Why is which server located where? Who decides how URLs are distributed? Who decides what standards are used? And so on… I think that’s an approach for feminist Netzpolitik and less for Netzfeminismus, which sometimes takes this up implicitly, sometimes also explicitly – but, generally, they are quite disparate fields.

Regarding the German Network Enforcement Act, or NetzDG, you write in the book that it only very vaguely includes feminist concerns. Can you elaborate on that? What’s missing?

The book is a child of its time, and especially that chapter, because in the meantime the Network Enforcement Act has progressed a bit. But, on the whole, I think the criticism remains that, first of all, there is an attempt to privatise law enforcement – or, let’s say, to privatise the first step of law enforcement – in the sense that the platforms decide what is legally legitimate and what is not; what you have to report and what you don’t. From a feminist perspective, this is difficult, because we live in a society that is patriarchally structured and where things like rape jokes are made, frequently actually, and other problematic things are said freely, not just those that are relevant to criminal law. Or, in other words, the legal system doesn’t necessarily protect women first, or people who are read as women, or protect Black persons first. Instead, it protects white masculinity, the subject from which everything else is defined. That’s why it’s a bit difficult, or let’s say, a balancing act, to argue for legal regulation from a feminist perspective. And that’s what I still find difficult about the NetzDG – in the end, the large corporations decide what is relevant for criminal prosecution and what isn’t. And I also have a problem with the structural displacement it entails: The state has a certain responsibility to care for its citizens, and this policy simply shifts that responsibility to the outside and says, “You do this. You also provide the care, and then, down the line, we’ll just check in to see whether you did it right or wrong. And if there’s any doubt, we’ll help.” This issue hasn’t changed, but some things have improved a bit since the book.

Recently, I was on a panel with someone from the field of media protection, and she was enthusiastic about everything. She said, “The NetzDG helped a lot! There is much less hate!” And I thought, I don’t know, most things are deleted according to community standards and not at all according to the NetzDG, because the hurdles are just insanely high if you want to delete something on Facebook according to the NetzDG. And you’re also just a bit afraid that you’ll somehow be half arrested or something if you’ve given the wrong information. And none of us are criminal lawyers, so we don’t know the details of the law for determining the legality of postings.

Regarding the law itself – and this concerns the penal code itself – many issues that relate to women, or other people who experience sexual violence, including sexualised violence in communication, are simply not covered, because gender is just not a protected category, and race even less so.

In the book, you show how financially driven actors, and not just activists, have played a role in promoting political responses to potential Internet abuses and violence, and also that the current governance structures in Germany assign responsibility for surveillance and content regulation to Internet platforms. You describe this as a “privatisation” of law enforcement. The platforms’ interests, you point out, are above all financial in nature. Increasingly, however, the platforms are compelled to work in tandem with other structures that bring different logics to their work, such as the German Federal Crime Office. Within this complex field of interrelated actors, how might we better recognise and de-centre or de-prioritise the financial interests of these companies?

Well, it’s of course not possible to remove their economic motivations. We would have to change the capitalist system. We are not going to do that now. We have to be realistic. But to work in that direction, I think it’s important to first establish that this is the case, and to give users the possibility to choose – to give them that knowledge, firstly, and to say, “Listen. With your data – especially with your interactions with other users, so not only with your login data, but with all that you do online, everything – it can all be used by other people to earn money.” That’s the first thing. And the other thing, the much bigger issue I think, is that we’re not clear as a society about how we want to communicate, about what is and should be permissible, and what is not and should not be permissible. This process of negotiation has been going on for some time now. I think we have to be clear, again, that this negotiation should not be driven or influenced by financial interests – by the interests of companies that really only have financial motivations, or that want to do everything in a particularly conservative way.

You focus on hate speech in the sixth chapter of the book, which is about feminist Netzpolitik and digital violence, and you criticise the fact that sexual, homophobic and transphobic violence are not explicitly named in the Council of Europe definition of hate speech. This is a very important point, and it got me thinking. Other forms of hate are also not explicitly mentioned, such as hatred towards homeless persons or persons living with particular illnesses, such as HIV or corona. Do you have thoughts about how to respond to these inevitable failures of the politics of naming? This has long been a topic in feminist and queer studies, and one we surely won’t resolve so easily, but are there alternatives here?

Maybe definitions must remain agile – because levels of knowledge change, because perspectives change. I actually like the Council of Europe definition, despite the fact that these points are missing, because other definitions of hate speech don’t include everything either. They have something about human dignity and human rights. I find that difficult, because you still don’t name whose human rights it is about in the end. So that’s why I like this one so much: because it’s quite clear, despite everything. And it’s relatively old now, from ’97; for the Internet age, that’s ancient. (laughs) But despite everything, it’s a relatively clear definition, and that’s why I still like to use it.

One major concern that you present in the book is about data surveillance, or dataveillance, and the oscillation of its functioning between care and control. You referred to it as a desire for “safety” that is also a desire for “inflexibility”. You also offer the following question as helpful to bring with you for mobilizing an intersectional analysis: “Which women are affected with what other discriminatory features and how?” Building on this question, how has surveillance changed in recent years as part of the Internet, and how are groups differently implicated in this history?

Surveillance no longer starts from individual questions: it no longer means looking at individual persons or targets; instead, whole contexts can now be monitored in a nonchalant way. I think that’s changed through technologization, and also the Internet, but probably mostly through the technology that’s driving it. Despite everything, our social coexistence in terms of power and domination has not changed. That means that white cis hetero masculinity is still at the centre and is also the positioning that is least – well, not least affected by surveillance, because we are all affected by surveillance in some way, but least impacted by its consequences. The impact of racial profiling, for example, is more likely to be felt by Black people; or the impact of the fact that we have body scanners at airports everywhere, or increasingly so, is more likely to be felt by trans people, who can have a different gender in their passport than what is shown on the scanner. And that’s why it’s usually the case that people who are already affected by discrimination – not only by discrimination, but also by the discriminatory structures of the state – are more likely to be affected and restricted by these surveillance structures, and to live in fear. So for them it’s not a security issue, but always an issue of insecurity.

In the book, you also call for more quantitative research on digital violence among BiPOC (Black, Indigenous, People of Colour) and LGBTIQA (lesbian, gay, bisexual, trans, inter, queer, asexual) persons, and persons with migration backgrounds or disabilities. Are there warnings that we should take with us with regard to quantitative research projects? Or how do we engage in that type of research without it becoming another form of surveillance, or a normalising force?

I understand such questions of science or research in this context more as a tool for making policy, not so much as a form of surveillance. But I honestly have to think a bit more about it. In the area of digital violence, I’ve been hoping for this research because there simply is not yet intersectional data in this form – to provide empirical input about what is happening. That is always a point from which you can start to argue, to advocate. And I think that’s very important. The things that exist in the German context are not very helpful from an intersectional perspective: a) because they have a wishy-washy definition of hate speech – not very precise, something like xenophobia and so on, and then everything can be hate speech; and b) because of the categories that are used – and this is the case in Germany for good reasons, which is why it’s going to be difficult: only gender, age, maybe self-assigned label in terms of positioning, but more likely not even that; migration background is there, but, again, this is another category that can be everything. It’s difficult.

For some of the largest platforms, algorithms already play a big role in filtering out content that might be hateful. In your book, you present some of the effects of algorithms on the Internet – especially how they come to take on the sexist, racist, homophobic and transphobic logics of the societies within which they are created. With regard to hate speech, the decisions of so-called cleaners – persons who work for companies hired by platforms to respond to flagged content – are being fed back into these algorithms to refine them further. Some hope that algorithms could drastically reduce, if not entirely eliminate, human involvement in selecting out hate on the Internet, which is significant given that the humans who take on these jobs are faced with overwhelming amounts of hate on a daily basis. What do you see as the potentials, and limits, of using algorithms to respond to hate online?

First of all, I don’t think that algorithms can filter out hate speech so finely that in the end we will only need algorithms. By now, they can probably filter out certain images, videos and so on without any problems, but everything that is somehow unclear – I don’t think they can pick those things up, and I would not think it possible for them not to be biased in the end somehow. Because we still only have a feeling for what hate speech, or digital violence, exactly is – at least for the things that are not relevant under criminal law. There are things one person might feel are not good, and are violent, and then someone else could have a completely different feeling. But these experiences and perceptions are often based on structures. So just because we don’t recognise something as violence that someone else experiences as violence, that doesn’t mean it isn’t violence. And I think you’re not going to solve that through technology, for one thing. The other thing is, for me, algorithms won’t solve our social problem of hate. Moderation removes something, but we still have a society that hates, and communicates that hate, and will simply find ways to get around it. And I think that’s why, for me, it’s first about social processes, and then second about technology, and not about using technology to solve social processes. We have to somehow manage that ourselves. I mean, I am sceptical about using technology to solve problems that we, as humans, have to solve with each other. If we can’t do that, then technology won’t do it either, because technology simply takes over our problems and somehow continues them, and maybe pushes them to a new place where we can no longer really intervene. So if technology can support us, then great. But I think if it’s used to replace us and the social and political work we have to do, it won’t work.

In the book, you suggest that the collectivisation of rights, rather than the current individualisation of rights claims, might be a possible future direction to take in the area of Netzpolitik. Can you explain how that might look in relation to online hate?

The collectivisation of rights, or the enforceability of women’s rights as a collective claim, has been sought from a feminist perspective for a relatively long time – think of the gender pay gap, for example: all things that particularly affect women as a group. But this hasn’t really been put into action in Germany yet. Collective claims already exist for many things, but not yet for women’s rights. If multiple women are affected by digital violence, then rather than dragging every single one of them before the court – where each either has to file a complaint and deal with the public prosecutor, or pursue the matter through a civil court (there’s already HateAid, a victim-support NGO based in Berlin, but they can only take on so many cases, so they focus instead on strategic litigation, which is great) – it would be clever to design the whole thing in such a way that you pull things together and create something like a collective class action. This would mean that not every individual would have to do it, and it could also build up pressure to act – both on the political question of regulation and on the platforms, so that they finally do something significant, and not just sharpen their community guidelines a little bit, always in their own interests, about what is considered sexist or not, or racist or not.

As you said, so much has happened in the year since this book was published. Among other things, new legislation has been passed in Germany, the Act to Combat Right-Wing Extremism and Hate Crime, and also in Europe, the Digital Services Act (DSA). In your book you state that, even if a call for certain regulations triggered by digital patriarchal violence is understandable, the question remains as to whether patriarchal violence can be combated with equally violent structures, for example through sanctioning. To what extent do you view these new laws – as embodiments of the violent structures you question – to be valuable resources for responding to violence online?

To a full extent. (laughs) We haven’t changed anything in the basic structure. Of course, these patriarchal violent structures remain in place. That doesn’t mean that I reject them in their entirety. Of course, definitely not. But despite everything, yes, it’s just like technology. In fact, it’s just another form of technology…these forms of laws. They will stimulate social change at best, but we have to make the changes ourselves. 

And do you see hope in these new developments? 

Well, in Germany, some things will change with the European law, above all the NetzDG. I am fundamentally an optimistic person. That means, of course, I have hope. So… (laughs) Above all, I have hope if we critically question the existing systems of administration, and then look again to see how we actually want to deal with this. The DSA does this. It does a lot, but among other things, it also asks how money is being earned with hate, and tries to prevent that too. And I am also hopeful that our consciousness about that becomes more focused, and that we perhaps find another way to respond. But that’s just a building block. We also see how dependent we are on information from these networks, and how quickly that can simply be turned off, as has happened in various authoritarian countries. And how much media competence is necessary to bypass the whole thing. 

Why has online hate speech become such an important issue lately? There are various explanations I’ve heard, like particular terrorist attacks or individual murders motivated by hate, and I am curious what you see as the reason.

Yes, well, from a feminist perspective, digital violence against women has always been an issue – since the Internet came to exist – but it hasn’t been an issue of popular interest. And I think, um, exactly, it got more intense on a broad level in 2015 with the refugee movement. And that has just not abated. I find this interesting and still wonder how we might use these dynamics in a different way – in a way that doesn’t promote social antagonisms.  

That leads me to my last question: What made you decide to write this book? 

Well, I found the topic interesting and wanted to think more about it, and to write about how we might negotiate the issue together as a society, or how it could be negotiated from a feminist perspective in the areas of digitalisation and Netzpolitik, especially in the political sphere. When I started working on this, the discussion was mostly just about Netzfeminismus, and how we can push forward feminist concerns on the Internet. Those concerns didn’t necessarily have anything to do with the topic of digitization. I’ve always found that to be a bit of a shame, because, from a feminist perspective, we continue to use the structures without question. It’s good that we use them, but we haven’t questioned the design of the structures in which we want to participate. And I found that quite interesting to look at – it actually interested me more than any activist questions. And then I also thought that these linkages between violence and surveillance, which I place next to each other in the book, have not been adequately discussed from a feminist perspective in the area of surveillance today, although that combined lens of course has a long feminist tradition.

Your current work is as Programme Director for intersectional knowledge related to memory and transformation at the Federal Agency for Civic Education. I understand you are working at the moment on trying to decolonise the Internet. Can you tell us a bit more about that before you go?

My current work is about trying to centre other stocks of knowledge in what technology can be and what technology should be, and where technology should change society. One of my main focuses now, with regards to knowledge about transformation, is to apply that to the development of technology. For what transformations do communities need technologies, and for what transformations don’t they need them? And how can you build technology so it makes sense for the communities? So not that just some dude in San Francisco thinks, “That’s mega cool, let’s use it somehow.” Do we all have to use it? Do we all need it? 

Thank you so much.

To read further in German, the book is available for purchase here.

The book’s introduction and conclusion have been translated into English and can be found here.