Software Flaw Exposed Facebook Moderators’ Info to Suspected Terrorists

A flaw in software used by Facebook to moderate offensive content exposed the personal profiles of more than 1,000 employees to suspected terrorists online.

According to a report published Friday by The Guardian’s Olivia Solon, the social network discovered the issue in late 2016 and found that moderators across 22 departments had had their personal profiles become visible to suspected extremists.

The profiles of the employees, who were tasked with removing terrorist propaganda and other banned content, began “automatically appearing as notifications in the activity log” of Facebook groups whose administrators were flagged and removed. The remaining members of those groups were then able to view the moderators’ personal details.

Roughly 40 of the 1,000 exposed employees worked in Dublin, Ireland, at Facebook’s counter-terrorism unit. Of those 40, six were labeled “high priority” after the social network determined “their personal profiles were likely viewed by potential terrorists.”

Speaking with The Guardian, one of the six employees, an Iraqi-born Irish citizen who asked to remain anonymous, stated that seven people linked to an Egyptian terrorist group sympathetic to Hamas and ISIS had seen his profile.

The moderator, a contractor who worked for Facebook through Cpl Recruitment, fled the country shortly afterward over fears of retaliation.

“The only reason we’re in Ireland was to escape terrorism and threats,” he said, revealing that numerous members of his family had been beaten or executed in Iraq.

Although Facebook initially “offered to install a home alarm monitoring system and provide transport to and from work” to high-priority moderators, the Iraqi-born man felt he had become too vulnerable.

“When you come from a war zone and you have people like that knowing your family name you know that people get butchered for that,” he added. “The punishment from ISIS for working in counter-terrorism is beheading. All they’d need to do is tell someone who is radical here.”

After five months in eastern Europe, the moderator returned to Ireland in May after running out of money.

“I don’t have a job, I have anxiety and I’m on antidepressants,” he said. “I can’t walk anywhere without looking back.”

The moderator has now filed a legal claim against both Facebook and Cpl Recruitment, seeking compensation for the psychological issues he has faced since the security breach.

The Guardian report also revealed how content monitors, who, according to the moderator, “come in every morning and just look at beheadings, people getting butchered, stoned, executed,” were seemingly required to use their own personal profiles while doing their work.

“They should have let us use fake profiles,” he said. “They never warned us that something like this could happen.”

In a statement confirming the incident, Facebook asserted it had taken technical steps to stop such an issue from occurring in the future.

“We care deeply about keeping everyone who works for Facebook safe,” a spokesman said. “As soon as we learned about the issue, we fixed it and began a thorough investigation to learn as much as possible about what happened.”

In total, the software bug remained active for up to a month and retroactively exposed the profiles of moderators who had flagged terrorist content as far back as August 2016.

