Mark noticed something amiss with his toddler. His son’s penis looked swollen and was hurting him. Mark, a stay-at-home dad in San Francisco, grabbed his Android smartphone and took photos to document the problem so he could track its progression.
It was a Friday night in February 2021. His wife called an advice nurse at their health care provider to schedule an emergency consultation for the next morning, by video because it was a Saturday and there was a pandemic going on. The nurse said to send photos so the doctor could review them in advance.
Mark’s wife grabbed her husband’s phone and texted a few high-quality close-ups of their son’s groin area to her iPhone so she could upload them to the health care provider’s messaging system. In one, Mark’s hand was visible, helping to better display the swelling. Mark and his wife gave no thought to the tech giants that made this quick capture and exchange of digital data possible, or what those giants might think of the images.
With help from the photos, the doctor diagnosed the issue and prescribed antibiotics, which quickly cleared it up. But the episode left Mark with a much larger problem, one that would cost him more than a decade of contacts, emails and photos, and make him the target of a police investigation. Mark, who asked to be identified only by his first name for fear of potential reputational harm, had been caught in an algorithmic net designed to snare people exchanging child sexual abuse material.
Because technology companies routinely capture so much data, they have been pressured to act as sentinels, examining what passes through their servers to detect and prevent criminal behavior. Child advocates say the companies’ cooperation is essential to combat the rampant online spread of sexual abuse imagery. But it can entail peering into private archives, such as digital photo albums, an intrusion users may not expect. In at least two cases The Times has unearthed, that scrutiny has cast innocent behavior in a sinister light.
Jon Callas, a technologist at the Electronic Frontier Foundation, a digital civil liberties organization, called the cases “canaries in this particular coal mine.”
“There could be tens, hundreds, thousands more of these,” he said.
Given the toxic nature of the accusations, Mr. Callas speculated that most people wrongfully flagged would not publicize what had happened.
“I knew that these companies were watching and that privacy is not what we would hope it to be,” Mark said. “But I haven’t done anything wrong.”
The police agreed. Google did not.
‘A Severe Violation’
After setting up a Gmail account in the mid-aughts, Mark, who is in his 40s, came to rely heavily on Google. He synced appointments with his wife on Google Calendar. His Android smartphone camera backed up his photos and videos to the Google cloud. He even had a phone plan with Google Fi.
Two days after taking the photos of his son, Mark’s phone made a blooping notification noise: His account had been disabled because of “harmful content” that was “a severe violation of Google’s policies and might be illegal.” A “learn more” link led to a list of possible reasons, including “child sexual abuse & exploitation.”
Mark was confused at first but then remembered his son’s infection. “Oh, God, Google probably thinks that was child porn,” he thought.
In an unusual twist, Mark had worked as a software engineer on a large technology company’s automated tool for taking down video content flagged by users as problematic. He knew such systems often have a human in the loop to ensure that computers don’t make a mistake, and he assumed his case would be cleared up as soon as it reached that person.
He filled out a form requesting a review of Google’s decision, explaining his son’s infection. At the same time, he discovered the domino effect of Google’s rejection. Not only did he lose emails, contact information for friends and former colleagues, and documentation of his son’s first years of life, but his Google Fi account also shut down, meaning he had to get a new phone number with another carrier. Without access to his old phone number and email address, he couldn’t get the security codes he needed to sign in to other internet accounts, locking him out of much of his digital life.
“The more eggs you have in one basket, the more likely the basket is to break,” he said.
In a statement, Google said, “Child sexual abuse material is abhorrent and we’re committed to preventing the spread of it on our platforms.”
A few days after Mark filed the appeal, Google responded that it would not reinstate the account, with no further explanation.
Mark didn’t know it, but Google’s review team had also flagged a video he made and the San Francisco Police Department had already started to investigate him.
How Google Flags Images
The day after Mark’s troubles started, the same scenario was playing out in Texas. A toddler in Houston had an infection in his “intimate parts,” wrote his father in an online post that I stumbled upon while reporting out Mark’s story. At the pediatrician’s request, Cassio, who also asked to be identified only by his first name, used an Android to take photos, which were backed up automatically to Google Photos. He then sent them to his wife via Google’s chat service.
Cassio was in the middle of buying a house, and signing countless digital documents, when his Gmail account was disabled. He asked his mortgage broker to switch his email address, which made the broker suspicious until Cassio’s real estate agent vouched for him.
“It was a headache,” Cassio said.
Images of children being exploited or sexually abused are flagged by technology giants millions of times each year. In 2021, Google alone filed over 600,000 reports of child abuse material and disabled the accounts of over 270,000 users.