Earlier this month, as anyone who has been anywhere near social media or a newspaper even here in the UK will know, a draft opinion from the US Supreme Court was leaked. It showed a majority opinion that the decision that has protected Americans' right to access abortion services for almost 50 years should be overturned. It's a draft, sure, so no guarantees what the final version will say, but it – and the current composition of the court – is more than enough to cause appropriate consternation that the demise of Roe vs Wade is likely.
What is fairly guaranteed though is at least some of what will happen next if this overturn does happen. The reversal of Roe vs Wade doesn't itself restrict abortion directly. But several states have already passed heavily restrictive abortion laws or bans that will be enacted right away if Roe vs Wade falls. It's thought that 26 states are likely to ban abortion in one way or another.

That implies that abortion clinics would be closed for 41% of US women of reproductive age. Of course there may be knock-on effects even for services in states where bans are unlikely, as the people in need of their services – at least those with enough resources to be able to do so – travel hundreds of miles out of state to places where the procedure can be safely carried out.
The Guardian published an informative visualisation of how the US “abortion deserts” would likely dramatically grow if Roe vs Wade is overturned.

To be fair, “grow” is the correct word – it’s clear from the “before” picture that abortions were already quite inaccessible for many people. Mississippi, for instance, has only a single abortion clinic in the whole state.
Other people in need may turn to illegal, dangerous methods of abortion. As noted previously, banning abortion doesn’t necessarily do much to lower abortion rates. Rather, it shifts the nature of the procedures taking place from safe, legal ones to more dangerous, clandestine methods.
It’s of note that a ban on abortion, whilst vocally supported by some individuals and groups within the US, is not at all the preferred option of the majority of the US population.
The Pew Research Center reports that 61% of Americans believe that abortion should "be legal in all or most circumstances". Only 8% of the surveyed population thought that abortion should be illegal without exception. And plenty of people had a far more nuanced view than the stereotype of the (appallingly-named, in my opinion) "pro-choice" vs "pro-life" argument would suggest.

The move is perhaps unsurprising in some ways given the recent trajectory of the US. Perhaps most notable in recent times was a bizarre and unsettling "workaround" law enacted in Texas that allows any private citizen who feels like it to sue anyone in Texas who in some way assisted with an abortion that occurs after 6 weeks of pregnancy – a point at which many people would not even realise they were pregnant.
Nonetheless, moving towards abortion bans is against the overall trend in a world where countries are typically going from more restrictive to more liberal legislation. The Council on Foreign Relations reports that:
The global trend in abortion law has been toward liberalization. Since 2000, thirty-eight countries have changed their abortion laws, and all but one—Nicaragua—expanded the legal grounds on which women can access abortion services.

How could your data be used against you?
Of potentially more direct relevance to the usual subject of this blog, the possible fall of Roe vs Wade has brought a necessary focus on a number of concerns around how data could be (ab)used to penalise people seeking abortions in an environment where it becomes illegal in certain US states.
Data on what you searched for
First up, search engine records. These days so much of the information we access starts off as a Google search. It’s very obvious that if someone was considering abortion services they’d often research the options on the internet. As many companies collect your search history over time – often directly linkable in theory to who you are, especially if you are logged into a service – this means the fact that you’ve expressed interest in abortion-related keywords becomes a matter of record in a database.
Seth Stephens-Davidowitz, author of the fascinating book 'Everybody Lies', believes that our web searches can illuminate secrets that we would never even tell each other.
I have spent the past four years analysing anonymous Google data. The revelations have kept coming…I am now convinced that Google searches are the most important dataset ever collected on the human psyche.
If abortion becomes illegal in some states then the police will presumably be able to investigate occurrences of it with their full powers. There are many ways that US law enforcement agencies can already access people's data – including any "legitimate" access rights they have to your phone directly, as well as their ability to issue warrants to technology companies requiring them to hand over a list of who was in a particular geographic location or searched for particular keywords in a given time frame. These are perfectly legal methods in use today for crime investigation, and don't require any sort of Edward-Snowden-revelation style secret infrastructure.
In fact, there’s a recent example of something exactly like this happening to be found in the case of Latice Fisher, who reported giving birth to a stillborn baby. The prosecutors wanted to charge her with murder and used the fact she had searched for abortion methods online as evidence of intent.
Web search data has been routinely used in many other types of cases. Amazon has previously been ordered to turn over recordings from an Amazon Echo to a court considering a murder.
Data on where you are
Moving beyond info searches, it's often been pointed out that these days people are likely to be carrying a location tracking device with them wherever they go, in the guise of a smartphone. The fact your phone usually knows your location is great when you need directions to where you want to go. But other uses are less positive.
Unless you explicitly disable the location features of your device, “ad tech” companies are often able to serve up adverts to your phone, tailored to where in the world you are, via “geo-fencing”. There are arguably reasonable uses of this – perhaps an advert that leads to a discount to a shop on the street you’re walking up can, very occasionally, feel like a good thing rather than an invasion of privacy. But there are also extremely unreasonable use cases.
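Mechanically, there's not much to a basic geo-fence: the ad platform simply checks whether a device's reported coordinates fall within some radius of a target point. A toy sketch of that check (the coordinates and radius here are invented purely for illustration):

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance between two lat/lon points, in metres."""
    r = 6_371_000  # mean Earth radius in metres
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = (math.sin(dp / 2) ** 2
         + math.cos(math.radians(lat1)) * math.cos(math.radians(lat2))
         * math.sin(dl / 2) ** 2)
    return 2 * r * math.asin(math.sqrt(a))

def in_geofence(device, fence_centre, radius_m):
    """True if a (lat, lon) device position falls inside the fence."""
    return haversine_m(*device, *fence_centre) <= radius_m

# A device roughly 111 metres north of the fence centre:
print(in_geofence((51.501, -0.1), (51.500, -0.1), 150))  # True
print(in_geofence((51.501, -0.1), (51.500, -0.1), 100))  # False
```

Every location-tagged ad request a device makes can be run through a filter like this, which is why simply being in a particular building is enough to put you in an advertiser's audience.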
There is nothing new about this. Several years ago, John Flynn, the CEO of Copley Advertising, was apparently offering a service to target adverts to phones situated inside abortion clinics – in effect bringing to the digital world something akin to the traumatic experience clients of US abortion clinics often face in being escorted into the building past hordes of abusive anti-abortion activists.
That is to say, when the patient is sitting in the waiting room of a Planned Parenthood clinic browsing their smartphone, they’ll be served up adverts that “know” that’s where they are. And “adverts” here really means anything someone is prepared to pay to show to these users. In this case that’d potentially include anti-abortion propaganda.
This is not a merely theoretical threat. Flynn claimed to have sent semi-innocuous seeming “Click here to be informed about pregnancy” adverts from clients like RealOptions. This is an organisation very likely less interested in actually empowering people with information to make the best choice for themselves, and more in their stated mission of “empowering and equipping women and men to choose life for their unborn children through the love of Jesus Christ”.
As damaging and offensive as this concept is, it may seem like an anonymous one-way dataflow which just mass-broadcasts to everyone within a certain geo-area. But these days marketers often build personas that piece together other datapoints on an individual – demographics, income, interests and so on. Used in conjunction with the knowledge that the recipient is in a location known to be a Planned Parenthood clinic, that makes the dataflow potentially a lot less anonymous.
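To make the mechanics concrete, here's a toy sketch of that kind of join. Every identifier and persona field in it is invented for the example, but the principle is how such enrichment works: the "anonymous" advertising ID seen inside the geofence is the same key a data broker may hold a detailed persona against.

```python
# Pings collected from devices seen inside a sensitive geofence.
clinic_pings = [
    {"ad_id": "a1b2-c3d4", "seen_at": "2022-05-10T10:32"},
    {"ad_id": "ffff-0000", "seen_at": "2022-05-10T11:05"},
]

# A broker's persona table, keyed by the same advertising identifier.
broker_personas = {
    "a1b2-c3d4": {"age_band": "25-34", "income_band": "mid",
                  "interests": ["parenting", "health"], "home_zip": "39201"},
}

def enrich(pings, personas):
    """Attach a persona to every ping whose advertising ID the broker knows."""
    return [{**ping, **personas[ping["ad_id"]]}
            for ping in pings if ping["ad_id"] in personas]

for record in enrich(clinic_pings, broker_personas):
    print(record)  # the ping is now tied to demographics and a home area
```

One dictionary merge is all it takes to turn "some device was in the clinic at 10:32" into "a 25-34 year old from this zip code was in the clinic at 10:32".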
The original Rewire report also mentions the possibility of encouraging the ad recipient to literally hand over their personal info; their name, address and so on. One can imagine for instance a seemingly innocuous banner ad, unrelated to any clinical service, that leads to a “you won a competition” page that has a form asking for those details.
Non-live location data showing movements to and from places, including clinics that perform abortions, is also readily and legally available. And it’s not expensive – Motherboard bought a file containing a week’s worth of location data for 600 Planned Parenthood locations in the US for $160 from a company called SafeGraph.
The data they supply can answer questions about how often people visit, for how long, and where they come from and go to afterwards, including an idea of whereabouts they live based on where the phone usually resides overnight. The data is sourced from the innumerable apps that people have installed on their phones that, often invisibly, send the user's location data to companies for a fee.
It's aggregate data, so doesn't directly identify an individual. However, reportedly some segments of the data contain just 4-5 participants, which makes it that much easier for a bad actor to re-identify them, particularly when the data includes details such as what type of phone someone has.
Besides, for years researchers have warned that re-identifying people from supposedly anonymous datasets is not as hard as one might think. One study, which analysed 1.5 million individuals, found that people move in such a unique fashion in comparison to each other that knowing 4 datapoints was enough to uniquely identify 95% of the individuals in their dataset.
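The intuition is easy to demonstrate on a toy dataset. The "traces" below are invented, and the real study worked with 1.5 million people rather than three, but the principle is the same: each known (place, hour) observation filters the candidate pool, and it takes remarkably few observations before only one person remains.

```python
# Invented movement traces: each person is a set of (place, hour) points.
traces = {
    "person_A": {("home", 8), ("office", 9), ("gym", 18), ("home", 22)},
    "person_B": {("home", 8), ("office", 9), ("cafe", 13), ("home", 21)},
    "person_C": {("home", 7), ("school", 9), ("park", 16), ("home", 20)},
}

def matching_traces(known_points):
    """Return everyone whose trace contains all of the known observations."""
    return [person for person, trace in traces.items()
            if known_points <= trace]

# Two observations narrow three people down to two; one more pins down one.
print(matching_traces({("home", 8), ("office", 9)}))  # person_A and person_B
print(matching_traces({("home", 8), ("gym", 18)}))    # person_A only
```

In a dataset of millions the filtering works the same way, just with more candidates to eliminate – which is why a handful of points sufficed for 95% of individuals.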
If we want to see a real-world example of where something like this was apparently done, then there's the case of Monsignor Jeffrey Burrill, a Catholic priest who was forced to resign after an investigation, detailed by the Pillar, apparently used commercially available data to determine that he had visited gay bars and used the Grindr app.
…an analysis of app data signals correlated to Burrill’s mobile device shows the priest also visited gay bars and private residences while using a location-based hookup app in numerous cities from 2018 to 2020…
According to commercially available records of app signal data obtained by The Pillar, a mobile device correlated to Burrill emitted app data signals from the location-based hookup app Grindr…
Besides, identifying specific individuals may not be the intent for users of this data. If access to abortion services is eradicated from some states, then analysing people crossing state lines to get to their nearest option would be an easy way to target facilities that provide that service. The threat of this becomes higher if states such as Missouri enact their proposals to make helping people get abortions even in other states illegal – in this case with a definition of "helping" apparently broad enough to include driving someone across state lines or allowing access to certain websites.
Data from period tracker and pregnancy apps
There are many popular apps people can use to track and monitor aspects of their reproductive health such as their menstrual cycles and details of their pregnancy. There's a concern that if abortion were criminalised then this data could be used as evidence towards someone having had such a procedure, if for example the police subpoenaed the company to get the data, or it was obtained via a leak or even a regular data broker. Many of these apps are not covered by health data protection regulations such as HIPAA in the US. And whilst most of the more reputable options will likely be built with privacy policies that imply that they understand they shouldn't share your personal medical data, that doesn't mean they're not sharing anything.
Consumer Reports analysed 5 popular apps in this space, and noted that, whilst they did not detect any major security issues, all 5 of them shared user data with advertisers and marketers.
More dramatically, in an earlier investigation by the Wall Street Journal, one of the more popular apps, Flo, was found to be sending very sensitive personal data to Facebook.
Flo Health Inc.’s Flo Period & Ovulation Tracker, which claims 25 million active users, told Facebook when a user was having her period or informed the app of an intention to get pregnant, the tests showed…sent with a unique advertising identifier that can be matched to a device or profile.
Following publication of the report, Flo stopped that practice.
These issues are impactful far beyond Roe vs Wade
In all this, I don't mean to blow things out of proportion. It's not likely that your neighbour is going to inadvertently wander into some kind of data shop and become aware of your inner thoughts. However, as some of the examples above show, it's also not impossible for citizens to target each other using data-based methods even for activities that are perfectly legal. For it to happen even rarely is still too often. And for anti-abortion pressure groups that aim to attack the whole class of people who need or support abortion services as opposed to a specific individual, finding a needle in a haystack is a lot easier when there are several needles.
The calculus also changes should you be a resident of a US state where abortion becomes illegal, or live in a non-US country with similar or more illiberal and punitive legislation. Since at least the Snowden revelations, we’ve known that governments and law enforcement agencies may collect huge amounts of information on us. They legally have great power to acquire and act upon it. Naturally they can and do make use of any commercially available data too. If your opponent becomes the state as opposed to an annoying neighbour or a random Twitter user, things are different.
It's worth noting that few of the data-based harms alluded to above are specific to abortion-related data. It's just clear to even us non-Americans that abortion is a very politically charged topic in the US, which brings a certain focus on it. Plus it can be an extremely personal, challenging and emotional experience to go through for some people who take that route. Even for those who absolutely know they made the right choice to go ahead with the procedure, it was often a difficult decision. Circumstances can also be such that high levels of distress are present. We should of course prioritise fighting against extra harm being done to people at a time of vulnerability.
Nonetheless, irrespective of your personal view on abortion policy, the general issue remains of how to control the untold amounts of data we generate every day – much of which contains people’s private information, or at least enough signal to potentially derive it from. This data can be used to target, manipulate or even threaten people, and hence is something we must grapple with the management of in the most general terms. The tools and technology that help extract meaning from masses of data are likely only going to get faster, better, cheaper and easier to use in the future, putting the prospective ability to persecute-by-data into the hands of ever more people if left unchecked.
What can you do to protect your privacy?
In the meantime, what can you do about it? Well, my view is that a lot of the structural fixes in practical terms need to be done by the companies and other organisations that are collecting, repackaging and distributing the more invasive or risky sets of data. That's a story for another day. But as a starting point, the Electronic Frontier Foundation has suggestions of what tech companies can do to help protect users' digital rights, and Techcrunch has 8 ideas of how tech companies can help to specifically protect abortion rights.
For now, as a user, there are a few guidelines we can consider. Again, the EFF has published a list of tips to improve your digital privacy. They’re presented in the context of the fall of Roe vs Wade, but in reality they’re very much applicable in general. They’re worth reading through in full, but some of the immediately doable tips include:
- Use privacy-focussed web browsers, or at least install browser extensions that help protect your privacy.
- Consider using multiple email addresses, phone numbers or other things companies may try to identify you by.
- Check the privacy settings for each app on your phone and any services you use. Explicitly disable any permissions the app doesn’t actually need. Not every app needs access to your location, your camera or your microphone.
- Use a trustworthy VPN.
- Consider turning off your phone or disabling location services if you are going somewhere you wouldn't want to be surveilled.
- Compartmentalise – keep sensitive activities separate from routine ones.
- Use secure and encrypted messaging services when chatting.
- Get consent from people before sharing their data, including photographs of them.
- Delete data or accounts that you don’t need any more.
- Keep sensitive data somewhere other than your main device if you can.
- Disable the ad tracking identifier on your phone. Instructions on how to do this are here.
Being choosy in what devices, apps and services you sign up to in the first place might also be helpful. In the world of mainstream phones, traditionally I think Apple is felt to have been more focussed on privacy and security than its Android competitors. It seems like this is still a commonly held opinion, although both types of devices are clearly vulnerable to some extent. Plus Apple devices tend to be more restricted and more expensive than some alternatives, putting them out of the reach of some potential users or use-cases.
Besides, you can potentially undo much of any default security by installing invasive apps. A recent paper found no clear difference in privacy between the iOS and Android apps they studied.
If an app is “free” then, unless it’s someone’s hobby, it’s worth considering how it’s being funded. Often it may be through targeted advertising or harvesting your data for resale. The tradeoff may be worth it, particularly for users who are resource-constrained, but it should be done with awareness.
In the iOS app store, many apps now have an “App Privacy” section that details what data the app collects and uses. I was going to share the one for Facebook below, but it’s so many screens full of information that I’ll just share the top couple. There’s far more below what’s displayed here.


It should be noted that the information in that section is self-reported by the developer of the app. Thus it cannot be relied on as a full picture, particularly if an app was developed by a more nefarious actor.
Since the release of iOS 15.2, there’s also been an app privacy report feature you can turn on to monitor which apps are accessing potentially sensitive data, how often and which servers they’re communicating with.