Let's face it: social media and privacy are always going to be two warring parties. Sure, privacy controls help users define who can see what on sites like Facebook, Twitter, and Pinterest (and some sites offer simpler, more accessible privacy options than others). But in the end, social networks are social - you're actively sharing content with the world. Anybody who thinks they can maintain a pristine level of privacy and security while still enjoying the benefits of a social community is probably deluding themselves.
Facebook is no stranger to user privacy scandals. Controversies over information sharing and user tracking have popped up repeatedly in the last couple of years, and the FTC has even stepped in to conduct its own investigations.
And recently, it was revealed that Facebook actively patrols user communications for unlawful activity. Is this a privacy betrayal from a company that sits on so much personal information about its users? Or is it a social good that allows Facebook to help prevent violent crimes, especially those involving children? Let us know in the comments.
A Winnipeg man is being charged with sexual assault, sexual interference, and internet luring after Facebook intercepted communications between him and a 13-year-old girl. According to Winnipeg police, the chat messages were sexual in nature, and were brought to their attention by Facebook near the end of July.
If the phrase "Facebook intercepted communications" caught your attention, I don't blame you. And yes, it's exactly what you're thinking - Facebook is actively monitoring our chats and messages. Early last month, the company revealed that it's common practice for its teams to scan chats in search of criminal activity. Algorithms handle most of this work, but once something is flagged, Facebook employees make the final decision on whether or not it merits calling the authorities.
Facebook's algorithms give more weight to communications between users who don't have many connections to each other. If two users have a large age difference, or live all the way across the country from each other, the conversation may be flagged. If two users share few friends, or have never interacted with each other on the site before, their conversation may be flagged.
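To make the idea concrete, the weak-tie signals described above could be combined into a simple scoring heuristic. The sketch below is purely hypothetical - Facebook has never published its actual algorithm, and every field name, weight, and threshold here is invented for illustration:

```python
# Hypothetical sketch of a pairwise "weak tie" flagging heuristic.
# All names, weights, and thresholds are invented; Facebook has not
# published the real scoring logic this article describes.

def suspicion_score(user_a, user_b, prior_interactions, mutual_friends):
    """Score a conversation pair: higher means weaker ties between the users."""
    score = 0.0
    # A large age gap between the two users raises the score.
    if abs(user_a["age"] - user_b["age"]) >= 15:
        score += 1.0
    # Users living far apart raises the score.
    if user_a["region"] != user_b["region"]:
        score += 1.0
    # Few mutual friends suggests the users barely share a network.
    if mutual_friends < 2:
        score += 1.0
    # No prior interaction on the site before this conversation.
    if prior_interactions == 0:
        score += 1.0
    return score

def should_flag_for_review(user_a, user_b, prior_interactions,
                           mutual_friends, threshold=3.0):
    """Queue the conversation for human review if enough signals fire."""
    score = suspicion_score(user_a, user_b, prior_interactions, mutual_friends)
    return score >= threshold

# An adult and a teen in different regions, with no shared friends or
# prior contact, trip enough signals to be flagged.
adult = {"age": 33, "region": "south"}
teen = {"age": 13, "region": "north"}
print(should_flag_for_review(adult, teen,
                             prior_interactions=0, mutual_friends=0))
```

Note the human-in-the-loop step the article describes: in a design like this, a high score would only queue the transcript for employee review, not automatically trigger a report to police.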
So it's fair to say that the "bad apple" conversations are going to be the ones most frequently caught up in the machine. But the final screening process for reporting malicious activity means that human eyes have to look at the chat transcripts - at least every now and then.
Back to Winnipeg, and to the 25-year-old man who was sending sexual messages to the underage girl. Authorities say that Facebook described the chats to them as "inappropriate" and "explicit."
Although Facebook notified police of the chats in late July, the suspect wasn't arrested until early last week.
And according to CNEWS, a sexual assault had already taken place. There's no word on whether the police received the tip from Facebook before or after the alleged assault.
So, police now have the Facebook data to use in prosecution, but it didn't actually stop a young girl from being sexually assaulted. It's unclear if that's because Facebook caught it late, police failed to act in time, or the assault had already occurred before anyone caught wind of the inappropriate chats. Really, it's not right to blame anyone here except the pedophile who allegedly committed the violent acts - but it does show that Facebook's monitoring program isn't perfect.
However, it also demonstrates that Facebook can do some real good with its chat monitoring. And according to Facebook, it has worked before - to perfection.
When the chat monitoring story first broke, Facebook told Reuters a story of how the program had led to the arrest of a man who was in the process of soliciting a 13-year-old girl on the network. Here's how Reuters told it:
A man in his early thirties was chatting about sex with a 13-year-old South Florida girl and planned to meet her after middle-school classes the next day. Facebook's extensive but little-discussed technology for scanning postings and chats for criminal activity automatically flagged the conversation for employees, who read it and quickly called police.
Officers took control of the teenager's computer and arrested the man the next day, said Special Agent Supervisor Jeffrey Duncan of the Florida Department of Law Enforcement. The alleged predator has pleaded not guilty to multiple charges of soliciting a minor.
"The manner and speed with which they contacted us gave us the ability to respond as soon as possible," said Duncan, one of a half-dozen law enforcement officials interviewed who praised Facebook for triggering inquiries.
There's really no denying that it can work. Scanning chats for suspicious activity can help thwart child predation.
Of course, there are still privacy concerns to consider. Not everyone is convinced that Facebook has the right to monitor "private" communications. Then again, you are using their (free) service to send and receive communications, and at least now it's with the public knowledge that the company may be monitoring them. Plus, they are not the only ones engaging in this type of monitoring.
Facebook won't comment on the particulars of the Winnipeg case, but they tell me that they have zero tolerance for this type of activity and are "extremely aggressive" in reporting it to the authorities.
Here's their full statement:
We have zero tolerance for this activity on Facebook and are extremely aggressive in preventing and identifying inappropriate contact as well as reporting it and the people responsible for it to law enforcement. We're constantly refining and improving our systems and processes. However, we feel we've created a much safer environment on Facebook than exists off-line, where people can share this material in the privacy of their own homes without anyone watching.
Have they created a "much safer environment"? In your opinion, is it okay for Facebook to patrol chats in order to help identify possible criminals? Is it a good program conducted in good faith? Is it worth giving up a little bit of your privacy for the greater good?
Or do you think that Facebook should cease this type of monitoring? Let us know in the comments.