Actually, Facebook Changed Its Terms To Cover That Experiment After It Was Over

    July 1, 2014
    Chris Crum

The plot thickens.

As you may know, it has come to light that Facebook ran an experiment with nearly 700,000 users in 2012, showing how it could manipulate emotions by showing users more positive or negative content in their News Feeds.

As some have pointed out, Facebook’s terms say it can use users’ info “for internal operations, including troubleshooting, data analysis, testing, research, and service improvement,” with research being the keyword in this case. Only one problem: that wasn’t actually in the terms when Facebook carried out the experiment.

Forbes points out that Facebook made changes to its data use policy four months after the experiment, and yes, that bit about research was one of those changes.

So if you were already upset about Facebook’s little test, there’s some more fuel for the fire. For some reason, images of Mark Zuckerberg sweating bullets while being grilled about privacy on stage at the D8 conference are coming to mind.

Facebook now has Consumer Watchdog on its back over the whole thing. The organization put out a press release calling Facebook’s research “unethical”.

“There is a longstanding rule that research involving human subjects requires informed consent. The researchers clearly didn’t get it,” said John M. Simpson, Consumer Watchdog’s Privacy Project director. “Sleazy, unethical behavior is nothing new for Facebook, so I’m not really surprised they would do this. The academic researchers involved with the project and the National Academy of Sciences, which published the results, should be ashamed.”

“Facebook’s TOS — like those of most Internet companies — are cleverly crafted by high-priced lawyers so as to be virtually indecipherable to the average user, but allow Facebook to do essentially whatever it wants commercially,” said Simpson. “It protects Facebook and its sleazy business practices, but it in no way provides the level of informed consent that is expected and required when doing research with human subjects.”

Obviously that was before it came to light that the part about research wasn’t even in the ToS when the experiment was carried out.

“Facebook has no ethics,” said Simpson. “They do what they want and what is expedient until their fingers are caught in the cookie jar. Like the rest of the tech giants, they then apologize, wait a bit and then try something new that’s likely to be even more outrageous and intrusive. Silicon Valley calls this innovation. I call it a complete disrespect for societal norms and customs.”

Yes, the current outrage will no doubt die down within the week, and Facebook will carry on being Facebook. And Facebook users will carry on using Facebook.

Image via YouTube

  • Loki57

    This article appears to be a follow-up to Chris’s piece from the day before. The comments on that article show how confused readers are about the social-ethics issues, versus business law and academic research ethics, laws, regulations, and process.


    Breaking the related issues into a series of comments on issue sets: UCSF seems to get a pass on the Facebook study, as further reports clarify that the author named there was a Cornell postdoc at the time, and was hired by UCSF only after the fact.


    Note how Cornell has assigned a spokesperson, who is not an author of the article, to manage public information about this.

    Forbes is reporting additional information suggesting Cornell’s IRB may have fatal flaws. Not only are there varying versions of the story about what was done up front versus after the fact for IRB review, and about Facebook’s versus Cornell’s involvement with human subjects versus data only, but subjects aged 13–18 (and, in turn, others below 13 who routinely lie about their age because of Federal laws) were apparently included in this study. That doubles the legal issues: incapacity to consent, and the competency of Cornell’s IRB as a rubber stamp versus a meaningful reviewer.


    We might also question whether the nature of this study required notice, with the ability to opt out of being used as a subject, whether for manipulation or for data-set inclusion.

    In addition, the study reports, “People who viewed Facebook in English were qualified for selection into the experiment.”

    That implies that EU and other privacy laws, often stricter than those of the USA, likely apply to some victims of this fraud, and that additional civil and criminal violations were likely perpetrated. A Facebook (or any other) TOS cannot assert a choice of jurisdiction and venue that exempts it from such laws.

  • Loki57

    Princeton may also get a pass, if their only connection is a PNAS journal editor, who privately may have been the first to raise serious issues about the nature and conduct of the study. It’s unclear how involved that editor was in producing the published study as a party to it, versus as an arm’s-length auditor and quality filter for the journal.

    In addition, with Cornell now admitting its people were involved up front before data collection, but claiming they did not do a human subjects study because Facebook implemented it, one might wonder if Cornell harvests IRB members from secret FISA courts?


    Doesn’t that sound a lot like Google or Facebook not giving the NSA, CIA, and the spectrum of 16 related secret agencies access to their servers, under the misrepresentation that by providing APIs for data harvesting, they only offered the ability to make requests that Google would automatically fulfill?

    National Geographic may do the best job of trying to simplify and focus the issues for a lay audience, but can people who lack advanced knowledge of academia, Federal bureaucracy, general law, and technology seriously parse more than a general sense of being victims, too ambiguous to be capable of meaningful consent in any case?


    That’s a problem not just in IRB-backed studies, but in TOS and other adhesion contracts, in medical procedures in general, and in politics, juries, government, and society in countless forms.

  • Loki57

    Last in this series of comments and added information: there’s a need to consider far broader aspects of information harvesting, including Google, telcos and spooks, other multi-billion-dollar public and private companies, etc. Among reporter bios in recent Atlantic stories is Harvard’s Berkman Center.


    That enters broader areas of how these issues relate to other entities and society.

    OK Cupid, Match Media, and IAC/I have also done these kinds of manipulative studies, but their TOS ban universities and other third parties from attempting the same there.

    They draw on Harvard math grads (across campus from the Berkman Center, which attempts to audit and challenge such abuses), with a Stanford MBA on top in the case of Sammy Yagan (ex-eDonkey, which survives, sort of, as eMule, after major legal problems and user rejection of predatory advertising embeds), and lots of Cornell engineers behind the scenes at Humor Rainbow, Inc. and Match Media (which subsequently acquired some EU-based subsidiaries and operated them illegally).

    They’re not subject to HHS IRB rules, but they have crossed into illegal activities and are overdue for more effective oversight and prosecution. Sammy is even so arrogant as to have bragged, in a PowerPoint-backed talk on their data harvesting at a marketing conference, about the incident where they added a fraudulent feature to track responses to liquor ads, tricked users with misrepresented features, and sold alcohol illegally to underage users. Of course the conference talk touting how wonderful IAC/I and Match Media and its divisions (including Sammy’s former OK Cupid) are was posted on Vimeo, another IAC/I property.

    What is meaningful informed consent, when most subjects could not accurately map or identify various such parties and their relationships?

    Obviously this wasn’t a Stanford Prison Experiment or a Milgram Study. Do businesses deserve a pass were they to conduct such experiments, which have long been illegal for academia based on histories of abuse?

    What about video games, where users willingly seek and buy experiences that are often more intense, and can be therapeutic for some, traumatic for others, in often unpredictable ways?

    Can it even be “equal protection of the laws” for market researchers or marketers to study or manipulate users (and without meaningful consent, if that matters) in ways that are illegal, not just unethical, for academics?

  • Loki57

    Further, it turns out Facebook was under a consent decree with the FTC based on past abuses, with specific requirements for explicit, distinct opt-in for future uses of covered and unspecified private information. Therefore, NOTHING that could have been in the ToS before or after the changes was legally adequate notice or consent.




    EPIC has filed a formal complaint over Facebook’s apparently blatant violations of the terms of that legal mandate not to weasel around in ways that abuse users or violate their rights. It appears the FTC may be obligated to impose additional consumer-fraud sanctions on Facebook.