2012 CyberWatch Year in Review: Social Media

December 15, 2012


Categories: CyberWatch, News and Announcements, Social Media CyberWatch


A new initiative for the Citizen Lab, Social Media CyberWatch explores current affairs relating to social media platform governance. These include issues of privacy, security, credibility and trust in relation to platform policies as well as policy-driven technical developments.

This year-end report summarizes several trends and noteworthy happenings of the past 12 months, including an increase in government user data requests, a community governance decision-making debacle, and controversies around various privacy-oriented technical implementations.

Government Requests for Social Media Data

As revealed in a number of documents published this year, governments from around the world are increasingly interacting with online service providers in order to surveil and censor citizens. To varying degrees, social media platforms comply with government requests and provide notice to users of such actions.

Transparency Reports: User data requests on the rise

Transparency reports released by web powerhouses Google and Twitter this year revealed a rise in government-initiated censorship and data access requests. Google’s report states that the United States, by far, made the most requests for user data, followed by India, Brazil, France and Germany. The United States made 7,969 requests for user data between January and June 2012, an increase of 33 percent over the same period the previous year.

Twitter’s report similarly shows that roughly 80 percent of the 849 government requests for user data in the first half of 2012 came from the United States. Twitter stated that it received more requests for user data in the first half of 2012 than in the entirety of 2011.

Twitter censorship

Twitter launched its country-specific censorship platform this year, which can withhold certain tweets or accounts from being displayed in certain countries. Twitter claims that censored material — referred to as “country withheld content” — is only blocked reactively; currently there is no automated filtering taking place. The rationale for “withholding” tweets is to ensure compliance with the platform’s rules, which clearly state that international users “agree to comply with all local laws regarding online conduct and acceptable content.”

Upon the announcement of this platform, citizens took to Twitter to protest the move; a particularly vocal group from Saudi Arabia decried the censorship and wondered whether the move was related to Saudi Prince Walid bin Talal’s US$300 million investment in the platform. The system made major news when, at the request of the German government, Twitter blocked the account of a German neo-Nazi group on the grounds that pro-Nazi speech is banned in Germany. For free speech advocates, this enforcement sets a dangerous precedent for social media censorship worldwide.

Who has your back?

This May, the Electronic Frontier Foundation (EFF) released When the Government Comes Knocking, Who Has Your Back?, its “Second Annual Report on Online Service Providers’ Privacy and Transparency Practices Regarding Government Access to User Data”. The report assessed the performance of various Online Service Providers (OSPs) in several categories: committing to inform users of government data requests, being transparent about when and how often data are handed to governments, and fighting for user rights in court or in the United States Congress. Twitter performed admirably, scoring 3.5 out of 4 stars, with Google and LinkedIn at 3, Facebook at 1.5, Microsoft, Apple and Yahoo at 1, and Foursquare and MySpace at 0. An example cited in the report describes how Twitter defended user privacy in court in the face of a government subpoena: Twitter defended Malcolm Harris, an Occupy Wall Street protestor, in New York City Criminal Court, but eventually provided the requested account data after months of legal proceedings. The posts were used as evidence to convict Harris of disorderly conduct.


Platform Citizenship

Social media platforms can be seen as quasi-public spheres; while individuals can ostensibly gather there to discuss issues, their space for doing so is controlled by private interests with little accountability to users. A recent controversy on Facebook sheds light on the tension between public and private interests on social media platforms.

“Death of democracy” on Facebook

Facebook faced criticism for taking steps to “abolish democracy” in its platform governance decision-making processes. For the past three years, Facebook required itself to provide notice to its users about proposed changes to its privacy, data use, and terms of service policies. Users would then have a chance to comment on the proposals, and if enough comments were received, a vote would be triggered. Users would choose between maintaining the existing policy or adopting the proposed changes. The vote would be binding if 30 percent of Facebook users took part.

Since the system was put in place, only three votes have ever been triggered; turnout in a vote earlier this year was a mere 0.038 percent. Facebook has now replaced the voting mechanism with a Q&A-format session in which users can offer “more meaningful feedback” to Facebook’s policymakers. Under its earlier policy, however, the company was (ironically) required to ask users to vote on losing their right to vote. That vote saw a turnout of roughly 660,000 people, with nearly 90 percent voting against the proposed changes. Nevertheless, the turnout represented only 0.06 percent of the user base, and, unsurprisingly, Facebook was able to proceed with its policy updates. The changes ensure that Facebook can update its policies without any formal accountability to user preferences.
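The turnout figure can be checked with quick arithmetic. The sketch below assumes a user base of roughly 1.06 billion monthly active users, the approximate number Facebook reported for late 2012; the exact denominator Facebook used is not stated in the source.

```python
# Rough check of the turnout percentage cited above.
# The user-base figure is an assumption (~1.06 billion monthly
# active users, Facebook's reported size in late 2012).
voters = 660_000
users = 1_060_000_000
turnout_pct = voters / users * 100
print(f"Turnout: {turnout_pct:.2f}% of the user base")  # Turnout: 0.06% of the user base
```

At that scale, even a heavily lopsided vote falls far short of the 30 percent threshold required to bind the company.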


Data & Policy Consolidation

This year saw several leaders in online data provision consolidate policies from across different product lines into single documents in order to legitimize company-wide data exchange and aggregation.

Google faced criticism in the EU after consolidating the privacy policies of its various services. Critics pointed out that this move facilitates data sharing among the different Google products, which, for example, would support targeted advertisements on YouTube based on the contents of one’s emails. Furthermore, users do not have control over what data can be shared between different Google services, and hence, different contexts.

In a similar move, Microsoft combined its disparate terms of service into a single services agreement that also explains that data may be shared across product lines. However, Microsoft differentiated itself from Google by claiming not to mine the contents of its users’ emails, positioning itself as a more privacy-conscious organization than Google, whose approach it blasted in front-page advertisements. Some observers saw the advertisements as disingenuous, noting that Microsoft and Google have largely similar data use practices, and that while Microsoft claims it does not engage in targeted advertising, its policies do not make this explicit.

In the face of privacy criticisms, Facebook has updated its data use policy to include provisions stating it “may share information” with the various branches of its businesses. This clause is widely viewed as enabling user data consolidation between Facebook and Instagram, its recent acquisition. As with the Google case, this will facilitate targeted advertising across different business lines.


Identifiability & Tracking

This year, technical initiatives with direct implications for the identifiability of online content and user behaviour experienced mixed success, as multiple stakeholders struggled over the balance between commercial interests and user privacy rights.

Do Not Track web privacy initiative controversy

The Do Not Track (DNT) proposed web standard was the subject of significant discussion earlier in the year, when Google Chrome became the last major web browser to implement it. The standard informs web servers of a client’s desire to opt out of third-party tracking mechanisms employed on the served web pages. Technically, browsers add this directive as a field in the headers of standard HTTP requests (the protocol that forms the primary basis of data exchange on the web). Servers then detect the presence of the field and react accordingly.
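The mechanism can be sketched in a few lines. The `DNT: 1` header is taken from the draft standard; the function names and the server-side logic below are illustrative, not drawn from any real browser or server implementation.

```python
# Sketch of how the DNT signal travels in an HTTP request.
# The client adds a "DNT: 1" header field; a cooperating server
# inspects it and disables its tracking mechanisms accordingly.

def build_request_headers(do_not_track: bool) -> dict:
    """Headers a browser might send; "DNT: 1" expresses an opt-out."""
    headers = {"User-Agent": "ExampleBrowser/1.0"}
    if do_not_track:
        headers["DNT"] = "1"
    return headers

def server_should_track(headers: dict) -> bool:
    """A compliant server honours "DNT: 1" by not tracking."""
    return headers.get("DNT") != "1"

print(server_should_track(build_request_headers(True)))   # False
print(server_should_track(build_request_headers(False)))  # True
```

Note that the header is purely advisory: nothing in the protocol forces a server to honour it, which is precisely what the disputes below turn on.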

Microsoft received both criticism and praise when it announced that the latest version (v. 10) of its Internet Explorer browser would enable DNT by default for all its users. Privacy proponents were encouraged to see privacy adopted as the default, arguing that advertising-industry self-regulation is inadequate to protect user data. However, Google, whose revenue comes primarily from advertising, and Mozilla, whose funding comes primarily from Google, derided the move. Additionally, Yahoo asserted that its services would ignore Internet Explorer 10 DNT directives, while one of the DNT standard’s authors (employed by Adobe) patched the widely deployed Apache server to similar effect, with both claiming that the headers do not represent an explicit user choice. These controversies leave the Federal Trade Commission-backed initiative on shaky ground.
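The Apache change reportedly worked along these lines; the fragment below is a paraphrase of the approach (matching Internet Explorer 10’s User-Agent string and stripping the header), not the exact text of the patch.

```apache
# If the User-Agent identifies Internet Explorer 10, flag the request...
BrowserMatch "MSIE 10.0" bad_DNT
# ...and remove the DNT header before applications see it, on the
# theory that a default-on signal reflects no explicit user choice.
RequestHeader unset DNT env=bad_DNT
```

The effect is that sites running the patched configuration never see the DNT signal from IE 10 users at all, regardless of whether an individual user deliberately left the default on.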

HTTPS on Facebook

After several months of engineering effort, Facebook finally implemented “https://” as the default transmission protocol for all its users. This encrypted sibling of the HTTP standard became the default on Gmail a number of years ago and on Twitter earlier in 2012. The change was introduced with the caveat that using the platform may be marginally slower and that non-secure third-party applications would cease functioning until they became compliant. The move adds a layer of security to all transmissions on the platform, which previously went largely unencrypted, to the chagrin of security watchdogs.
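The substance of the change is a scheme switch: every plain-HTTP URL is served over its TLS-encrypted equivalent instead. A minimal sketch of that rewrite, using Python’s standard library (the function name and example URL are illustrative only):

```python
from urllib.parse import urlsplit, urlunsplit

def upgrade_to_https(url: str) -> str:
    """Rewrite a plain-HTTP URL to its encrypted HTTPS equivalent,
    leaving already-secure URLs untouched."""
    parts = urlsplit(url)
    if parts.scheme == "http":
        parts = parts._replace(scheme="https")
    return urlunsplit(parts)

print(upgrade_to_https("http://www.facebook.com/profile"))
# https://www.facebook.com/profile
```

In practice a platform enforces this server-side (typically with an HTTP-to-HTTPS redirect), so that even users who type or follow plain-HTTP links end up on an encrypted connection.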

Facebook Tag Suggest

At the insistence of the Irish Data Protection Authority, Facebook disabled its Tag Suggest feature for European users and deleted any facial recognition templates it had built from their images. When the feature was initially released as a way to make tagging one’s friends more convenient, users were automatically enrolled into a facial recognition database without their express consent. This practice contravened a recent opinion of the EU Data Protection Working Party, which stated that enrolling users in such a practice requires consent to the specific act of processing images for facial recognition.


Conclusion

As individuals place more of their lives online, a growing range of data types, collection, aggregation and disclosure contexts, and policy items threaten to overwhelm users’ ability to rationally provide informed consent. Initiatives to strengthen regulatory control over electronic data access are seeing results, though it is likely that the tension between industry and government regulation will continue to keep users guessing about the degree to which their data are protected.


Read previous editions of Social Media CyberWatch.
