Twitter lets advertisers “takeover” the Explore tab

Twitter is ready to squeeze a lot more money out of its trending topics. After minimizing its mediocre Moments feature and burying it inside the renamed Explore tab, Twitter is now starting to test Promoted Trend Spotlight ads. These put a big visual banner, backed by a GIF or image, atop Explore for the first two times you visit that day, before the ad settles back into the Trends list. The first batch comes from Disney in the US.

These powerful new ad units demote organic content in Explore, which could make it less useful for getting a grip on what’s up in the world at a glance. But they could earn Twitter strong revenue by being much more eye-catching than the traditional Timeline ads that people often skip past. That could further fuel Twitter’s turnaround after it soundly beat revenue estimates in Q1 with $665 million. Its share price of about $44 is near its 52-week high, and almost 3X its low for the year.

“We are continuing to explore new ways to enhance our takeover offerings and give brands more high-impact opportunities to drive conversation and brand awareness on our platform,” a Twitter spokesperson told TechCrunch.

The Promoted Trend Spotlight ads are bought as an add-on to the existing Promoted Trends ads that are inserted amongst the list of Twitter’s most popular topics. When tapped, they open a feed of tweets about that trend, with one of the advertiser’s related tweets at the top. Back in February, AdAge reported whispers of a visual redesign for Promoted Trends. You can view a demo of the experience below.

Anthy Price, Disney’s Executive Vice President for Media, provided TechCrunch with a statement, saying “The Promoted Trend Spotlight on Twitter allowed us to prominently highlight Winnie the Pooh & celebrate the launch of ticket sales for Christopher Robin while four of the characters took over major Disney handles on the platform to engage with fans.”

Historically, Twitter’s biggest problem was that people skimmed past ads. The old unfiltered Timeline trained users to pick and choose what they read, looking past anything that didn’t seem relevant including paid marketing. But with the shift to an algorithmic Timeline and bigger focus on video, Twitter has slowly retrained users to expect relevant content in every slot. Explore’s design with imagery at the top followed by a text list of Trends pulls attention to where these new Spotlight ads sit. With better monetization, Twitter will now have to concentrate on building better ways to get users to open Explore instead of just their feed, notifications, and DMs.


Source: https://techcrunch.com/2018/07/11/twitter-promoted-trend-spotlight/


Facebook independent research commission ‘Social Science One’ will share a petabyte of user data

Back in April, Facebook announced that it would be working with a group of academics to establish an independent research commission to look into issues of social and political significance using the company’s own extensive data collection. That commission just came out of stealth; it’s called Social Science One, and its first project will have researchers analyzing about a petabyte’s worth of sharing data.

The way the commission works is basically that a group of academics is created and given full access to the processes and datasets that Facebook could potentially provide. They identify and help design interesting sets based on their experience as researchers themselves, then document them publicly — for instance, “this dataset consists of 10 million status updates taken during the week of the Brexit vote, structured in such and such a way.”

This documentation describing the set doubles as a “request for proposals” from the research community. Other researchers interested in the data propose analyses or experiments, which are evaluated by the commission. Successful proposals are then granted, according to their merit, access to the data, funding and other privileges. Resulting papers will be peer reviewed with help from the Social Science Research Council, and can be published without being approved (or even seen) by Facebook.

“The data collected by private companies has vast potential to help social scientists understand and solve society’s greatest challenges. But until now that data has typically been unavailable for academic research,” said Social Science One co-founder, Harvard’s Gary King, in a blog post announcing the initiative. “Social Science One has established an ethical structure for marshaling privacy preserving industry data for the greater social good while ensuring full academic publishing freedom.”

If you’re curious about the specifics of the partnership, it’s actually been described in a paper of its own, available here.

The first dataset is a juicy one: “almost all” public URLs shared and clicked by Facebook users globally, accompanied by a host of useful metadata.

It will contain “on the order of 2 million unique URLs shared in 300 million posts, per week,” reads a document describing the set. “We estimate that the data will contain on the order of 30 billion rows, translating to an effective raw size on the order of a petabyte.”
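Taken at face value, those order-of-magnitude figures imply a surprisingly rich per-row payload. As a quick back-of-envelope check (my own arithmetic based only on the numbers quoted above, not anything from the dataset documentation):

```python
# Sanity-check the quoted dataset figures. All inputs are the
# order-of-magnitude numbers from the article, so the result is
# a rough estimate only.
rows = 30e9      # "on the order of 30 billion rows"
raw_size = 1e15  # "on the order of a petabyte" (10^15 bytes)

bytes_per_row = raw_size / rows
print(f"{bytes_per_row:,.0f} bytes per row")  # ≈ 33,333, i.e. tens of KB per row
```

Tens of kilobytes per URL row suggests each record carries far more than a handful of scalar fields, which squares with the long list of metadata items the documentation describes.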

The metadata includes country, user age, device and so on, but also dozens of other items, such as “ideological affiliation bucket,” the proportion of friends vs. non-friends who viewed a post, feed position, the number of total shares, clicks, likes, hearts, flags… there’s going to be quite a lot to sort through. Naturally all this is carefully pruned to protect user privacy — this is a proper research dataset, not a Cambridge Analytica-style catch-all siphoned from the service.

In a call accompanying the announcement, King explained that the commission had much more data coming down the pipeline, with a focus on disinformation, polarization, election integrity, political advertising, and civic engagement.

“It really does get at some of the fundamental questions of social media and democracy,” King said on the call.

The other sets are in various stages of completeness or permission: post-election survey participants in Mexico and elsewhere are being asked if their responses can be connected with their Facebook profiles; the political ad archive will be formally made available; they’re working on something with CrowdTangle; there are various partnerships with other researchers and institutions around the world.

A “continuous feed of all public posts on Facebook and Instagram” and “a large random sample of Facebook newsfeeds” are also under consideration, probably encountering serious scrutiny and caveats from the company.

Of course quality research must be paid for, and it would be irresponsible not to note that Social Science One is funded not by Facebook but by a number of foundations: the Laura and John Arnold Foundation, The Democracy Fund, The William and Flora Hewlett Foundation, The John S. and James L. Knight Foundation, The Charles Koch Foundation, Omidyar Network’s Tech and Society Solutions Lab, and The Alfred P. Sloan Foundation.

You can keep up with the organization’s work here; it really is a promising endeavor and will almost certainly produce some interesting science — though not for some time. We’ll keep an eye out for any research emerging from the partnership.


Source: https://techcrunch.com/2018/07/11/facebook-independent-research-commission-social-science-one-will-share-a-petabyte-of-user-data/

Hold for the drop: Twitter to purge locked accounts from follower metrics

Twitter is making a major change aimed at cleaning up the spammy legacy of its platform.

This week it will globally purge accounts it has previously locked (i.e. after suspecting them of being spammy) — by removing the accounts from users’ follower metrics.

Which in plain language means Twitter users with lots of followers are likely to see their follower counts take a noticeable hit in the coming days. So hold tight for the drop.

Late last month Twitter flagged smaller changes to follower counts, also as part of a series of platform-purging anti-spam measures, warning users they might see their counts fluctuate more now that counts are displayed in near real-time (a change intended to stop spambots and follow scams from artificially inflating account metrics).

But the global purge of locked accounts from user account metrics looks like it’s going to be a rather bigger deal, putting some major dents in certain high profile users’ follower counts — and some major dents in celeb egos.

Hence Twitter has blogged again. “Follower counts are a visible feature, and we want everyone to have confidence that the numbers are meaningful and accurate,” writes Twitter’s Vijaya Gadde, legal, policy and trust & safety lead, flagging the latest change.

It will certainly be interesting to see whether the change substantially dents Twitter follower counts of high profile users — such as Katy Perry (109,609,073 Twitter followers at the time of writing); Donald Trump (53,379,873); Taylor Swift (85,566,010); Elon Musk (22,329,075); and Beyoncé (15,303,191), to name a few of the platform’s most followed users.

Check back in a week to see how their follower counts look.

“Most people will see a change of four followers or fewer; others with larger follower counts will experience a more significant drop,” warns Gadde, adding: “We understand this may be hard for some, but we believe accuracy and transparency make Twitter a more trusted service for public conversation.”

Twitter is also warning that while “the most significant changes” will happen in the next few days, users’ follower counts “may continue to change more regularly as part of our ongoing work to proactively identify and challenge problematic accounts”.

The company says it locks accounts if it detects sudden changes in account behavior — such as tweeting “a large volume of unsolicited replies or mentions, Tweeting misleading links, or if a large number of accounts block the account after mentioning them” — which therefore may indicate an account has been hacked/taken over by a spambot.

It says it may also lock accounts if it sees email and password combinations from other services posted online and believes that information could put the security of an account at risk.

After locking an account, Twitter contacts the owner to try to confirm they still have control of it. If the owner does not reply to confirm, the account stays locked, and will soon also be removed from follower counts globally.

Twitter emphasizes that locked accounts already cannot Tweet, like or Retweet, and are not served ads. But removing them from follower counts is an important additional step that it’s great to see Twitter making — albeit at long last.


Twitter also specifies that locked accounts that have not reset their password in more than one month were already excluded from its MAU and DAU counts — so today it reiterated the CFO’s recent message that this change won’t affect its own platform usage metrics.

The company has been going through what — this time — looks to be a serious house-cleaning process for some months now, after years and years of criticism for failing to tackle rampant spam and abuse on its platform.

In March, Twitter CEO Jack Dorsey also put out a call for ideas to help it capture, measure and evaluate healthy interactions on its platform and the health of public conversations generally — saying: “Ultimately we want to have a measurement of how it affects the broader society and public health, but also individual health, as well.”


Source: https://techcrunch.com/2018/07/11/hold-for-the-drop-twitter-to-purge-locked-accounts-from-follower-metrics/

Timehop admits that additional personal data was compromised in breach

Timehop is admitting that additional personal information was compromised in a data breach on July 4.

The company first acknowledged the breach on Sunday, saying that users’ names, email addresses and phone numbers had been compromised. Today it said that additional information, including dates of birth and gender, was also taken.

To understand what happened, and what Timehop is doing to fix things, I spoke to CEO Matt Raoul, COO Rick Webb and the security consultant that the company hired to manage its response. (The security consultant agreed to be interviewed on-the-record on the condition that they not be named.)

To be clear, Timehop isn’t saying that there was a separate breach of its data. Instead, the team has discovered that more data was taken in the already-announced incident.

Why didn’t they figure that out sooner? In an updated version of its report (which was also emailed to customers), the company put it simply: “Because we messed up.” It goes on:

In our enthusiasm to disclose all we knew, we quite simply made our announcement before we knew everything. With the benefit of staff who had been vacationing and unavailable during the first four days of the investigation, and a new senior engineering employee, as we examined the more comprehensive audit on Monday of the actual database tables that were stolen it became clear that there was more information in the tables than we had originally disclosed. This was precisely why we had stated repeatedly that the investigation was continuing and that we would update with more information as soon as it became available.

In both the email and my interviews, the Timehop team noted that the service does not have any financial information from users, nor does it perform the kinds of detailed behavioral tracking that you might expect from an ad-supported service. The team also emphasized that users’ “memories” — namely, the older social media posts that people use Timehop to rediscover — were not compromised.

How can they be sure, particularly since some of the compromised data was overlooked in the initial announcement? Well, the breach affected one specific database, while the memories are stored separately.

“That stuff is what we cared about, that stuff was protected,” Webb said. The challenge is, “We have to make a mental note to think about everything else.”


The breach occurred when someone accessed a database in Timehop’s cloud infrastructure that was not protected by two-factor authentication, though Raoul insisted that the company was already using two-factor quite broadly — it’s just that this “fell through the cracks.”

It’s also worth noting that while 21 million accounts were affected, Timehop had varying amounts of data about different users. For example, it says that 18.6 million email addresses were compromised (down from the “up to 21 million” addresses first reported), compared to 15.5 million dates of birth. In total, the company says 3.3 million records were compromised that included names, email addresses, phone numbers and DOBs.

None of those things may seem terribly sensitive (anyone with a copy of my business card and access to Google could probably get that information about me), but the security consultant acknowledged that in the “very, very small percentage” of cases where the records included full names, email addresses, phone numbers and DOBs, “identity theft becomes more likely,” and he suggested that users take standard steps to protect themselves, including password-protecting their phones.

Meanwhile, the company says that it worked with the social media platforms to detect activity that used the compromised authorization tokens, and it has not found anything suspicious. At this point, all of the tokens have been deauthorized (requiring users to re-authorize all of their accounts), so it shouldn’t be an ongoing issue.

As for other steps Timehop is taking to prevent future breaches, the security consultant told me the company is already in the process of ensuring that two-factor authentication is adopted across the board and encrypting its databases, as well as improving the process of deploying code to address security issues.

In addition, the company has shared the IP addresses used in the attack with law enforcement, and it will be sharing its “indicators of compromise” with partners in the security community.


Everyone acknowledged that Timehop made real mistakes, both in its security and in the initial communication with customers. (As the consultant put it, “They made a schoolboy mistake by not doing two-factor authentication.”) However, they also suggested that their response was guided, in part, by the accelerated disclosure timeline required by Europe’s GDPR regulations.

The security consultant told me, “We haven’t had the time fine-toothed comb kinds of things we normally want to do,” like an in-depth forensic analysis. Those things will happen, he said — but thanks to GDPR, the company needed to make the announcement before it had all the information.

And overall, the consultant said he’s been impressed by Timehop’s response.

“I think it really says a lot to their integrity that they decided to go fully public the second they knew it was a breach,” he said. “I want to point out these guys responded within 24 hours with a full-on incident response and secured their environments. That’s better than so many companies.”


Source: https://techcrunch.com/2018/07/11/timehop-data-breach/

Facebook under fresh political pressure as UK watchdog calls for “ethical pause” of ad ops

The UK’s privacy watchdog revealed yesterday that it intends to fine Facebook the maximum possible (£500k) under the country’s 1998 data protection regime for breaches related to the Cambridge Analytica data misuse scandal.

But that’s just the tip of the regulatory missiles now being directed at the platform and its ad-targeting methods — and indeed, at the wider big data economy’s corrosive undermining of individuals’ rights.

Alongside yesterday’s update on its investigation into the Facebook-Cambridge Analytica data scandal, the Information Commissioner’s Office (ICO) has published a policy report — entitled Democracy Disrupted? Personal information and political influence — in which it sets out a series of policy recommendations related to how personal information is used in modern political campaigns.

In the report it calls directly for an “ethical pause” around the use of microtargeting ad tools for political campaigning — to “allow the key players — government, parliament, regulators, political parties, online platforms and citizens — to reflect on their responsibilities in respect of the use of personal information in the era of big data before there is a greater expansion in the use of new technologies”.

The watchdog writes [emphasis ours]:

Rapid social and technological developments in the use of big data mean that there is limited knowledge of – or transparency around – the ‘behind the scenes’ data processing techniques (including algorithms, analysis, data matching and profiling) being used by organisations and businesses to micro-target individuals. What is clear is that these tools can have a significant impact on people’s privacy. It is important that there is greater and genuine transparency about the use of such techniques to ensure that people have control over their own data and that the law is upheld. When the purpose for using these techniques is related to the democratic process, the case for high standards of transparency is very strong.

Engagement with the electorate is vital to the democratic process; it is therefore understandable that political campaigns are exploring the potential of advanced data analysis tools to help win votes. The public have the right to expect that this takes place in accordance with the law as it relates to data protection and electronic marketing. Without a high level of transparency – and therefore trust amongst citizens that their data is being used appropriately – we are at risk of developing a system of voter surveillance by default. This could have a damaging long-term effect on the fabric of our democracy and political life.

It also flags a number of specific concerns attached to Facebook’s platform and its impact upon people’s rights and democratic processes — some of which are sparking fresh regulatory investigations into the company’s business practices.

“A significant finding of the ICO investigation is the conclusion that Facebook has not been sufficiently transparent to enable users to understand how and why they might be targeted by a political party or campaign,” it writes. “Whilst these concerns about Facebook’s advertising model exist generally in relation to its commercial use, they are heightened when these tools are used for political campaigning. Facebook’s use of relevant interest categories for targeted advertising and its Partner Categories Service are also cause for concern. Although the service has ceased in the EU, the ICO will be looking into both of these areas, and in the case of partner categories, commencing a new, broader investigation.”

The ICO says its discussions with Facebook for this report focused on “the level of transparency around how Facebook user data and third party data is being used to target users, and the controls available to users over the adverts they see”.

Among the concerns it raises about what it dubs Facebook’s “very complex” online targeting advertising model are [emphasis ours]:

Our investigation found significant fair-processing concerns both in terms of the information available to users about the sources of the data that are being used to determine what adverts they see and the nature of the profiling taking place. There were further concerns about the availability and transparency of the controls offered to users over what ads and messages they receive. The controls were difficult to find and were not intuitive to the user if they wanted to control the political advertising they received. Whilst users were informed that their data would be used for commercial advertising, it was not clear that political advertising would take place on the platform.

The ICO also found that despite a significant amount of privacy information and controls being made available, overall they did not effectively inform the users about the likely uses of their personal information. In particular, more explicit information should have been made available at the first layer of the privacy policy. The user tools available to block or remove ads were also complex and not clearly available to users from the core pages they would be accessing. The controls were also limited in relation to political advertising.

The company has been criticized for years for confusing and complex privacy controls. But during the investigation, the ICO says it was also not provided with “satisfactory information” from the company to understand the process it uses for determining what interest segments individuals are placed in for ad targeting purposes.

“Whilst Facebook confirmed that the content of users’ posts were not used to derive categories or target ads, it was difficult to understand how the different ‘signals’, as Facebook called them, built up to place individuals into categories,” it writes.

Similar complaints of foot-dragging responses to information requests related to political ads on its platform have also been directed at Facebook by a parliamentary committee that’s running an inquiry into fake news and online disinformation — and in April the chair of the committee accused Facebook of “a pattern of evasive behavior”.

So the ICO is not alone in feeling that Facebook’s responses to requests for specific information have lacked the specific information being sought. (CEO Mark Zuckerberg also annoyed the European Parliament with highly evasive responses to their highly detailed questions this Spring.)

Meanwhile, a European media investigation in May found that Facebook’s platform allows advertisers to target individuals based on interests related to sensitive categories such as political beliefs, sexuality and religion — which are categories that are marked out as sensitive information under regional data protection law, suggesting such targeting is legally problematic.

The investigation found that Facebook’s platform enables this type of ad targeting in the EU by making sensitive inferences about users — inferred interests including communism, social democrats, Hinduism and Christianity. And its defense against charges that what it’s doing breaks regional law is that inferred interests are not personal data.

However the ICO report sends a very chill wind rattling towards that fig leaf, noting “there is a concern that by placing users into categories, Facebook have been processing sensitive personal information – and, in particular, data about political opinions”.

It further writes [emphasis ours]:

Facebook made clear to the ICO that it does ‘not target advertising to EU users on the basis of sensitive personal data’… The ICO accepts that indicating a person is interested in a topic is not the same as formally placing them within a special personal information category. However, a risk clearly exists that advertisers will use core audience categories in a way that does seek to target individuals based on sensitive personal information. In the context of this investigation, the ICO is particularly concerned that such categories can be used for political advertising.

The ICO believes that this is part of a broader issue about the processing of personal information by online platforms in the use of targeted advertising; this goes beyond political advertising. It is clear from academic research conducted by the University of Madrid on this topic that a significant privacy risk can arise. For example, advertisers were using these categories to target individuals with the assumption that they are, for example, homosexual. Therefore, the effect was that individuals were being singled out and targeted on the basis of their sexuality. This is deeply concerning, and it is the ICO’s intention as a concerned authority under the GDPR to work via the one-stop-shop system with the Irish Data Protection Commission to see if there is scope to undertake a wider examination of online platforms’ use of special categories of data in their targeted advertising models.

So, essentially, the regulator is saying it will work with other EU data protection authorities to push for a wider, structural investigation of online ad targeting platforms which put users into categories based on inferred interests — and certainly where those platforms are allowing targeting against special categories of data (such as data related to racial or ethnic origin, political opinions, religious beliefs, health data, sexuality).

Another concern the ICO raises that’s specifically attached to Facebook’s business is transparency around its so-called “partner categories” service — an option for advertisers that allows them to use third party data (i.e. personal data collected by third party data brokers) to create custom audiences on its platform.

In March, ahead of a major update to the EU’s data protection framework, Facebook announced it would be “winding down” this service over the next six months.

But the ICO is going to investigate it anyway.

“A preliminary investigation of the service has raised significant concerns about transparency of use of the [partner categories] service for political advertising and wider concerns about the legal basis for the service, including Facebook’s claim that it is acting only as a processor for the third-party data providers,” it writes. “Facebook announced in March 2018 that it will be winding down this service over a six-month period, and we understand that it has already ceased in the EU. The ICO has also commenced a broader investigation into the service under the DPA 1998 (which will be concluded at a later date) as we believe it is in the public interest to do so.”

In conclusion on Facebook the regulator asserts the company has not been “sufficiently transparent to enable users to understand how and why they might be targeted by a political party or campaign”.

“Individuals can opt out of particular interests, and that is likely to reduce the number of ads they receive on political issues, but it will not completely block them,” it points out. “These concerns about transparency lie at the core of our investigation. Whilst these concerns about Facebook’s advertising model exist in general terms in relation to its use in the commercial sphere, the concerns are heightened when these tools are used for political campaigning.”

The regulator also looked at political campaign use of three other online ad platforms — Google, Twitter and Snapchat — although Facebook gets the lion’s share of its attention in the report given the platform has also attracted the lion’s share of UK political parties’ digital spending. (“Figures from the Electoral Commission show that the political parties spent £3.2 million on direct Facebook advertising during the 2017 general election,” it notes. “This was up from £1.3 million during the 2015 general election. By contrast, the political parties spent £1 million on Google advertising.”)

The ICO is recommending that all online platforms which provide advertising services to political parties and campaigns should include experts within the sales support team who can provide political parties and campaigns with “specific advice on transparency and accountability in relation to how data is used to target users”.

“Social media companies have a responsibility to act as information fiduciaries, as citizens increasingly live their lives online,” it further writes.

It also says it will work with the European Data Protection Board, and the relevant lead data protection authorities in the region, to ensure that online platforms comply with the EU’s new data protection framework (GDPR) — and specifically to ensure that users “understand how personal information is processed in the targeted advertising model, and that effective controls are available”.

“This includes greater transparency in relation to the privacy settings, and the design and prominence of privacy notices,” it warns.

Facebook has long been criticized for using dark pattern design and A/B-tested social engineering to obtain user consent for processing people’s data while obfuscating its intentions for that data. The ICO is here signaling that this practice is very much on the regulatory radar in the EU.

So expecting new laws — as well as lots more GDPR lawsuits — seems prudent.

The regulator is also pushing for all four online platforms to “urgently roll out planned transparency features in relation to political advertising to the UK” — in consultation with both relevant domestic oversight bodies (the ICO and the Electoral Commission).

In Facebook’s case, it has been developing policies around political ad transparency — amid a series of related data scandals in recent years, which have ramped up political pressure on the company. But self-regulation looks very unlikely to go far enough (or fast enough) to fix the real risks now being raised at the highest political levels.

“We opened this report by asking whether democracy has been disrupted by the use of data analytics and new technologies. Throughout this investigation, we have seen evidence that it is beginning to have a profound effect whereby information asymmetry between different groups of voters is beginning to emerge,” writes the ICO. “We are now at a crucial juncture where trust and confidence in the integrity of our democratic process risks being undermined if an ethical pause is not taken. The recommendations made in this report — if effectively implemented — will change the behaviour and compliance of all the actors in the political campaigning space.”

Another key policy recommendation the ICO is making is to urge the UK government to legislate “at the earliest opportunity” to introduce a statutory Code of Practice under the country’s new data protection law for the use of personal information in political campaigns.

The report also essentially calls out all the UK’s political parties for data protection failures — a universal problem that’s very evidently being supercharged by the rise of accessible and powerful online platforms which have enabled political parties to combine (and thus enrich) voter databases they are legally entitled to with all sorts of additional online intelligence that’s been harvested by the likes of Facebook and other major data brokers.

Hence the ICO’s concern about “developing a system of voter surveillance by default”. And why she’s pushing for online platforms to “act as information fiduciaries”.

Or, in other words, without exercising great responsibility around people’s information, online ad platforms like Facebook risk becoming the enabling layer that breaks democracy and shatters civic society.

Particular concerns being attached by the ICO to political parties’ activities include: The purchasing of marketing lists and lifestyle information from data brokers without sufficient due diligence; a lack of fair processing; and use of third party data analytics companies with insufficient checks around consent. And the regulator says it has several related investigations ongoing.

In March, the information commissioner, Elizabeth Denham, foreshadowed the conclusions in this report, telling a UK parliamentary committee she would be recommending a code of conduct for political use of personal data, and pushing for increased transparency around how and where people’s data is flowing — telling MPs: “We need information that is transparent, otherwise we will push people into little filter bubbles, where they have no idea about what other people are saying and what the other side of the campaign is saying. We want to make sure that social media is used well.”

The ICO says now that it will work closely with government to determine the scope of the Code. It also wants the government to conduct a review of regulatory gaps.

We’ve reached out to the Cabinet Office for a government response to the ICO’s recommendations. Update: A Cabinet Office spokesperson directed us to the Department for Digital, Culture, Media and Sport — and a DCMS spokesman told us the government will wait to review the full ICO report once it’s completed before setting out a formal response.

A Facebook spokesman declined to answer specific questions related to the report — instead sending us this short statement, attributed to its chief privacy officer, Erin Egan: “As we have said before, we should have done more to investigate claims about Cambridge Analytica and take action in 2015. We have been working closely with the ICO in their investigation of Cambridge Analytica, just as we have with authorities in the US and other countries. We’re reviewing the report and will respond to the ICO soon.”

Here’s the ICO’s summary of its ten policy recommendations:

1) The political parties must work with the ICO, the Cabinet Office and the Electoral Commission to identify and implement a cross-party solution to improve transparency around the use of commonly held data.

2) The ICO will work with the Electoral Commission, Cabinet Office and the political parties to launch a version of its successful Your Data Matters campaign before the next General Election. The aim will be to increase transparency and build trust and confidence amongst the electorate.
Source: https://techcrunch.com/2018/07/11/facebook-under-fresh-political-pressure-as-uk-watchdog-calls-for-ethical-pause-of-ad-ops/

UK’s Information Commissioner will fine Facebook the maximum £500K over Cambridge Analytica breach

Facebook continues to face fallout over the Cambridge Analytica scandal, which revealed how user data was stealthily obtained by way of quizzes and then appropriated for other purposes, such as targeted political advertising. Today, the U.K. Information Commissioner’s Office (ICO) announced that it would be issuing the social network with its maximum fine, £500,000 ($662,000) after it concluded that it “contravened the law” — specifically the 1998 Data Protection Act — “by failing to safeguard people’s information.”

The ICO is clear that Facebook effectively broke the law by failing to keep users’ data safe when its systems allowed Dr Aleksandr Kogan, who developed an app called “This is your digital life” on behalf of Cambridge Analytica, to scrape the data of up to 87 million Facebook users. This included accessing all of the friends’ data of the individual accounts that had engaged with Dr Kogan’s app.

The ICO’s inquiry first started in May 2017 in the wake of the Brexit vote and questions over how parties could have manipulated the outcome using targeted digital campaigns.

Damian Collins, the MP who chairs the Digital, Culture, Media and Sport Committee that has been undertaking the investigation, said as a result that the DCMS will now demand more information from Facebook, including which other apps might have been involved or used in a similar way by others, as well as what potential links all of this activity might have had to Russia. He’s also gearing up to demand a full, independent investigation of the company, rather than the internal audit that Facebook has so far provided. A full statement from Collins is below.

The fine, and the follow-up questions that U.K. government officials are now asking, are a signal that Facebook — after months of grilling on both sides of the Atlantic amid a wider investigation — is not yet off the hook in the U.K. This will come as good news to those who watched the hearings (and non-hearings) in Washington, London and European Parliament and felt that Facebook and others walked away relatively unscathed. The reverberations are also being felt in other parts of the world. In Australia, a group earlier today announced that it was forming a class action lawsuit against Facebook for breaching data privacy as well. (Australia has also been conducting a probe into the scandal.)

The ICO also put forward three questions alongside its announcement of the fine, which it will now be seeking answers to from Facebook. In its own words:

  1. Who had access to the Facebook data scraped by Dr Kogan, or any data sets derived from it?
  2. Given Dr Kogan also worked on a project commissioned by the Russian Government through the University of St Petersburg, did anyone in Russia ever have access to this data or data sets derived from it?
  3. Did organisations who benefited from the scraped data fail to delete it when asked to by Facebook, and if so where is it now?

The DCMS committee has been conducting a wider investigation into disinformation and data use in political campaigns and it plans to publish an interim report on it later this month.

Collins’ full statement:

Given that the ICO is saying that Facebook broke the law, it is essential that we now know which other apps that ran on their platform may have scraped data in a similar way. This cannot be left to a secret internal investigation at Facebook. If other developers broke the law we have a right to know, and the users whose data may have been compromised in this way should be informed.

Facebook users will be rightly concerned that the company left their data far too vulnerable to being collected without their consent by developers working on behalf of companies like Cambridge Analytica. The number of Facebook users affected by this kind of data scraping may be far greater than has currently been acknowledged. Facebook should now make the results of their internal investigations known to the ICO, our committee and other relevant investigatory authorities.

Facebook state that they only knew about this data breach when it was first reported in the press in December 2015. The company has consistently failed to answer the questions from our committee as to who at Facebook was informed about it. They say that Mark Zuckerberg did not know about it until it was reported in the press this year. In which case, given that it concerns a breach of the law, they should state who was the most senior person in the company to know, why they decided people like Mark Zuckerberg didn’t need to know, and why they didn’t inform users at the time about the data breach. Facebook need to provide answers on these important points. These important issues would have remained hidden, were it not for people speaking out about them. Facebook’s response during our inquiry has been consistently slow and unsatisfactory.

The receivers of SCL elections should comply with the law and respond to the enforcement notice issued by the ICO. It is also disturbing that AIQ have failed to comply with their enforcement notice.

Facebook has been in the ICO’s crosshairs over other data protection issues before, and has not come out well.


Source: https://techcrunch.com/2018/07/10/uks-information-commissioner-will-fine-facebook-the-maximum-500k-over-cambridge-analytica-breach/

Your Red-Tape Toolkit: 7 Ways to Earn Trust and Get Your Search Work Implemented

Posted by HeatherPhysioc

Tell me if this rings a bell. Are your search recommendations overlooked and misunderstood? Do you feel like you hit roadblocks at every turn? Are you worried that people don’t understand the value of your work?

I had an eye-opening moment when my colleague David Mitchell, Chief Technology Officer at VML, said to me, “You know the best creatives here aren’t the ones who are the best artists — they’re the ones who are best at talking about the work.”

I have found that the same holds true in search. As an industry, we are great at talking about the work — we’re fabulous about sharing technical knowledge and new developments in search. But we’re not so great at talking about how we talk about the work. And that can make all the difference between our work getting implemented and achieving great results, or languishing in a backlog.

It’s so important to learn how to navigate corporate bureaucracy and cut through red tape to help your clients and colleagues understand your search work — and actually get it implemented. From diagnosing client maturity to communicating where search fits into the big picture, the tools I share in this article can help equip you to overcome obstacles to doing your best work.

Buying Your Services ≠ Buying In

Just because a client signed a contract with you does not mean they are bought in to implementing every change you recommend. It seemingly defies all logic that someone would agree that they need organic search help enough to sign a contract and pay you to make recommendations, only for those recommendations to never go live.

When I was an independent contractor serving small businesses, they were often overwhelmed by their marketing and willing to hand over the keys to the website so my developers could implement SEO recommendations.

Then, as I got into agency life and worked on larger and larger businesses, I quickly realized it was a lot harder to get SEO work implemented. I started hitting roadblocks with a number of clients, and it was a slow, arduous process to get even small projects pushed through. It was easy to get impatient and fed up.

Worse, it was hard for some of my team members to see their colleagues getting great search work implemented and earning awesome results for their clients, while their own clients couldn’t seem to get anything implemented. It left them frustrated, jaded, feeling inadequate, and burned out — all the while the client was asking where the results were for the projects they didn’t implement.

What Stands in the Way of Getting Your Work Implemented

I surveyed colleagues in our industry about the common challenges they experience when trying to get their recommendations implemented. (Thank you to the 141 people who submitted!) The results were roughly one-third in-house marketers and two-thirds external marketers providing services to clients.

The most common obstacles we asked about fell into a few main categories:

  • Low Understanding of Search
    • Client Understanding
    • Peer/Colleague Understanding
    • Boss Understanding
  • Prioritization & Buy-In
    • Low Prioritization of Search Work
    • External Buy-In from Clients
    • Internal Buy-In from Peers
    • Internal Buy-In from Bosses
    • Past Unsuccessful Projects or Mistakes
  • Corporate Bureaucracy
    • Red Tape and Slow Approvals
    • No Advocate or Champion for Search
    • Turnover or Personnel Changes (Client-Side)
    • Difficult or Hostile Client
  • Resource Limitations
    • Technical Resources for Developers / Full Backlog
    • Budget / Scope Too Low to Make Impact
    • Technical Limitations of Digital Platform

The chart below shows how the obstacles in the survey stacked up. Higher scores mean people reported it as a more frequent or common problem they experience:

Some participants also wrote in additional blockers they’ve encountered – everything from bottlenecks in the workflow to over-complicated processes, lack of ownership to internal politics, shifting budgets to shifting priorities.

Too real? Are you completely bummed out yet? There is clearly no shortage of things that can stand in the way of SEO progress, and likely our work as marketers will never be without challenges.

Playing the Blame Game

When things don’t go our way and our work gets intercepted or lost before it ever goes live, we tend to be quick to blame clients. It’s the client’s fault things are hung up; if only the client had listened to us; the client’s business is the problem.

But I don’t buy it.

Don’t get me wrong — this could possibly be true in part in some cases, but rarely is it the whole story and rarely are we completely powerless to effect change. Sometimes the problem is the system, sometimes the problem is the people, and my friends, sometimes the problem is you.

But fortunately, we are all optimizers — we all inherently believe that things could be just a little bit better.

These are the tools you need in your belt to face many of the common obstacles to implementing your best search work.

7 Techniques to Get Your Search Work Approved & Implemented

When we enter the world of search, we are instantly trained on how to execute the work – not the soft skills needed to sustain and grow the work, break down barriers, get buy-in and get stuff implemented. These soft skills are critical to maximize your search success for clients, and can lead to more fruitful, long-lasting relationships.

Below are seven of the most highly recommended skills and techniques, from the SEO professionals surveyed and my own experience, to learn in order to increase the likelihood your work will get implemented by your clients.

1. How Mature Is Your Client?

Challenges to implementation tend to be organizational, people, integration, and process problems. Conducting a search maturity assessment with your client can be eye-opening to what needs to be solved internally before great search work can be implemented. Pairing a search capabilities model with an organizational maturity model gives you a wealth of knowledge and tools to help your client.

I recently wrote an in-depth article for the Moz blog about how to diagnose your client’s search maturity in both technical SEO capabilities and their organizational maturity as it pertains to a search program.

For search, we can think about a maturity model two ways. One may be the actual technical implementation of search best practices — is the client implementing exceptional, advanced SEO, just the basics, nothing at all, or even operating counterproductively? This helps identify what kinds of project make sense to start with for your client. Here is a sample maturity model across several aspects of search that you can use or modify for your purposes:

This SEO capabilities maturity model only starts to solve for what you should implement, but doesn’t get to the heart of why it’s so hard to get your work implemented. The real problems are a lot more nuanced, and aren’t as easy as checking the boxes of “best practices SEO.”

We also need to diagnose the organizational maturity of the client as it pertains to building, using and evolving an organic search practice. We have to understand the assets and obstacles of our client’s organization that either aid or block the implementation of our recommendations in order to move the ball forward.

If, after conducting these maturity model exercises, we find that a client has extremely limited personnel, budget and capacity to complete the work, that’s the first problem we should focus on solving for — helping them allocate proper resources and prioritization to the work.

If we find that they have plenty of personnel, budget, and capacity, but have no discernible, repeatable process for integrating search into their marketing mix, we focus our efforts there. How can we help them define, implement, and continually evolve processes that work for them and with the agency?

Perhaps the maturity assessment finds that they are adequate in most categories, but struggle with being reactive and implementing retrofitted SEO only as an afterthought. In that case, we may help them investigate their actionable workflows and connect dots across departments. How can we insert organic search expertise in the right ways at the right moments to have the greatest impact?

2. Speak to CEOs and CMOs, Not SEOs

Because we are subject matter experts in search, we are responsible for educating clients and colleagues on the power of SEO and the impact it can have on brands. If the executives are skeptical or don’t care about search, it won’t happen. If you want to educate and inspire people, you can’t waste time losing them in the details.

Speak Their Language

Tailor your educational content to busy CEOs and CMOs, not SEOs. Make the effort to listen to, read, write, and speak their corporate language. Their jargon is return on investment, earnings per share, operational costs. Yours is canonicalization, HTTPS and SSL encryption, 302 redirects, and 301 redirect chains.

Be mindful that you are coming from different places and meet them in the middle. Use layperson’s terms that anyone can understand, not technical jargon, when explaining search.

Don’t be afraid to use analogies (e.g. instead of “implement permanent 301 redirect rewrite rules in the .htaccess file to correct 404 not found errors,” try “it’s like forwarding your mail when you change addresses”).
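For the practitioners reading, the jargon version behind that analogy really is only a line or two of Apache configuration. A minimal sketch of an .htaccess rule — the paths and domain here are hypothetical, purely for illustration:

```apache
# "Forward the mail": permanently (301) redirect a moved page to its
# new address, so visitors and search engines stop hitting a 404.
Redirect 301 /old-page/ https://www.example.com/new-page/
```

The point of the analogy stands: the CMO doesn’t need to see this line, but your developer does — keep each version of the explanation for the audience it serves.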

Get Out of the Weeds

Perhaps because we are so passionate about the inner workings of search, we often get deep into the weeds of explaining how every SEO signal works. Even things that seem not-so-technical to us (title tags and meta description tags, for example) can lose your audience’s attention in a heartbeat. Unless you know that the client is a technical mind who loves to get in the weeds or that they have search experience, stay at 30,000 feet.

Another powerful tool here is to show, not tell. Often you can tell a much more effective and hard-hitting story using images or smart data visualization. When your audience can see what you’re proposing instead of trying to decipher it by ear, you can communicate complex information much more succinctly.

Focus on Outcomes

The goal of educating is not teaching peers and clients how to do search. They pay you to know that. Focus on the things that actually matter to your audience. (Come on, we’re inbound marketers — we should know this!) For many brands, that may include benefits like how it will build their brand visibility, how they can conquest competitors, and how they can make more money. Focus on the outcomes and benefits, not the granular, technical steps of how to get there.

What’s In It for Them?

Similarly, if you are doing a roadshow to educate your peers in other disciplines and get their buy-in, don’t focus on teaching them everything you know. Focus on how your work can benefit them (make their work smarter, more visible, make them more money) rather than demanding what other departments need to do for you. Aim to align on when, where, and how your two teams intersect to get greater results together.

3. SEO is Not the Center of the Universe

It was a tough pill for me to swallow when I realized that my clients simply didn’t care as much about organic search as my team and I did. (I mean, honestly, who isn’t passionate about dedicating their careers to understanding human thinking and behavior when we search, then optimizing technical stuff and website content for those humans to find it?!)

Bigger Fish to Fry

While clients may honestly love the sound of things we can do for them with search, rarely is SEO the only thing — or even a sizable thing — on a client’s mind. Rarely is our primary client contact someone who is exclusively dedicated to search, and typically, not even exclusively to digital marketing. We frequently report to digital directors and CMOs who have many more and much bigger fish to fry.

They have to look at the big picture and understand how the entire marketing mix works, and in reality, SEO is only one small part of that. While organic search is typically a client’s biggest source of traffic to their website, we often forget that the website isn’t even at the top of the priorities list for many clients. Our clients are thinking about the whole brand and the entirety of its marketing performance, or the organizational challenges they need to overcome to grow their business. SEO is just one small piece of that.

Acknowledge the Opportunity Cost

The benefits of search are no-brainers for us and seem so obvious, but we fail to acknowledge that every decision a CMO makes has a risk, time commitment, and cost associated with it. Every time they invest in something for search, it is an opportunity cost for another marketing initiative. We fail to take the time to understand all the competing priorities that a client has to choose between with a limited budget.

To persuade them to choose an organic search project over something else — like a paid search, creative, paid media, email, or other play — we had better make a damn good case to justify not just the hard cost in dollars, but the opportunity cost to other marketing initiatives. (More on that later.)

Integrated Marketing Efforts

More and more, brands are moving to integrated agency models in hopes of getting more bang for their buck by maximizing the impact of every single campaign across channels working together, side-by-side. Until we start to think more about how SEO ladders up to the big picture and works alongside or supports larger marketing initiatives and brand goals, we will continue to hamstring ourselves when we propose ideas to clients.

It’s our responsibility to seek big-picture perspective and figure out where we fit. We have to understand the realities of a client’s internal and external processes, their larger marketing mix and SEO’s role in that. SEO experts tend to obsess over rankings and website traffic. But we should be making organic search recommendations within the context of their goals and priorities — not what we think their goals and priorities should be.

For example, we have worked on a large CPG food brand for several years. In year one, my colleagues did great discovery work and put together an awesome SEO playbook, and we spent most of the year trying to get integrated and trying to check all these SEO best practices boxes for the client. But no one cared and nothing was getting implemented. It turned out that our “SEO best practices” didn’t seem relevant to the bigger-picture initiatives and brand campaigns they had planned for the year, so they were being deprioritized or ignored entirely. In year two, our contract was restructured to focus search efforts primarily on the planned campaigns for the year. Were we doing the search work we thought we would be doing for the client? No. Are we being included more and getting great search work implemented finally? Yes. Because we stopped trying to veer off in our own direction and started pulling the weight alongside everyone else toward a common vision.

4. Don’t Stay in Your Lane, Get Buy-In Across Lanes

Few brands hire only SEO experts and no other marketing services to drive their business. They have to coordinate a lot of moving pieces to drive all of them forward in the same direction as best they can. In order to do that, everyone has to be aligned on where we’re headed and the problems we’re solving for.

Ultimately, for most SEOs, this is about having the wisdom and humility to realize that you’re not in this alone – you can’t be. And even if you don’t get your way 100% of the time, you’re a lot more likely to get your way more of the time when you collaborate with others and ladder your efforts up to the big picture.

One of my survey respondents phrased it beautifully: “Treat all search projects as products that require a complete product team including engineering, project manager, and business-side folks.”

Horizontal Buy-In

You need buy-in across practices in your own agency (or combination of agencies serving the client and internal client team members helping execute the work). We have to stop swimming in entirely separate lanes where SEO sets goals by itself and doesn’t align to the larger business initiatives and marketing channels. We are all in this together to help the client solve for something. We have to learn to better communicate the value of search as it aligns to larger business initiatives, not as a separate swim lane.

Organic search is unique in how dependent it is on others to get our work implemented. You can’t operate entirely separately from the analytics experts, developers, user experience designers, social media, paid search, and so on — especially when they’re all working together toward a common goal on behalf of the client.

Vertical Buy-In

To get buy-in for implementing your work, you need buy-in beyond your immediate client contact. You need buy-in top-to-bottom in the client’s organization — it has to support what the C-level executive cares about as much as your day-to-day contacts or their direct reports.

It can be especially helpful to start within the agency — selling the value of the idea and getting the buy-in of your colleagues first. It forces you to vet and strengthen your idea, helps find blind spots, and hones the pitch for the client. Then, bring those important people to the table with the client — it gives you strength in numbers and expertise to have the developer, user experience designer, client engagement lead, and data analyst on the project in your corner validating the recommendation.

When you get to the client, it is so important to help them understand the benefits and outcomes of doing the project, the cost (and opportunity cost) of doing it, and how this can get them results toward their big picture goals. Understand their role in it and give them a voice, and make them the hero for approving it. If you have to pitch the idea at multiple levels, custom tailor your approach to speak to the client-side team members who will be helping you implement the work differently from how you would speak to the CMO who decides whether your project lives or dies.

5. Build a Bulletproof Plan

Here’s how a typical SEO project is proposed to a client: “You should do this SEO project because SEO.”

This explanation is not good enough, and they don’t care. You need to know what they do care about and are trying to accomplish, and formulate a bulletproof business plan to sell the idea.

Case Studies as Proof-of-Concept

Case studies serve a few important purposes: they help explain the outcomes and benefits of SEO projects, they prove that you have the chops to get results, and they prove the concept using someone else’s money first, which reduces the perceived risk for your client.

In my experience and in the survey results, case studies come up time and again as the leading way to get client buy-in. Ideally you would use case studies that are your own, very clearly relevant to the project at hand, and created for a client that is similar in nature (like B2B vs. B2C, in a similar vertical, or facing a similar problem).

Even if you don’t have your own case studies to show, do your due diligence and find real examples other companies and practitioners have published. As an added bonus, the results of these case studies can help you forecast the potential high/medium/low impact of your work.

Source: https://moz.com/blog/earn-trust-get-search-work-implemented

WhatsApp now marks forwarded messages to curb the spread of deadly misinformation

WhatsApp just introduced a new feature designed to help its users identify the origin of information that they receive in the messaging app. For the first time, a forwarded WhatsApp message will include an indicator that marks it as forwarded. It’s a small shift for the messaging platform, but potentially one that could make a big difference in the way people transmit information, especially dubious viral content, over the app.

The newest version of WhatsApp includes the feature, which marks forwarded messages with subtle but hard-to-miss italicized text above the content of a message.

The forwarded message designation is meant as a measure to control the spread of viral misinformation in countries like India, where the company has 200 million users. Misinformation spread through the app has been linked to the mob killing of multiple men who were targeted by false rumors accusing them of kidnapping children. Those rumors are believed to have spread through Facebook and WhatsApp.

Last week, India’s Information Technology Ministry issued a warning to WhatsApp specifically:

“Instances of lynching of innocent people have been noticed recently because of large number of irresponsible and explosive messages filled with rumours and provocation are being circulated on WhatsApp. The unfortunate killing in many states such as Assam, Maharashtra, Karnataka, Tripura and west Bengals are deeply painful and regretable.

While the Law and order machinery is taking steps to apprehend the culprits, the abuse of platform like WhatsApp for repeated circulation of such provocative content are equally a matter of deep concern. The Ministry of Electronics and Information Technology has taken serious note of these irresponsible messages and their circulation in such platforms. Deep disapproval of such developments has been conveyed to the senior management of the WhatsApp and they have been advised that necessary remedial measures should be taken to prevent  proliferation of  these  fake  and at times motivated/sensational messages. The Government has also directed that spread of such messages should be immediately contained through the application of appropriate technology.

It has also been pointed out that such platform cannot evade accountability and responsibility specially when good technological inventions are abused by some miscreants who resort to provocative messages which lead to spread of violence.

The Government has also conveyed in no uncertain terms that WhatsApp must take immediate action to end this menace and ensure that their platform is not used for such malafide activities.”

In a blog post accompanying the new message feature, WhatsApp encouraged its users to stop and think before sharing a forwarded message.


Source: https://techcrunch.com/2018/07/10/whatsapp-forwarded-messages-india/

Facebook is testing augmented reality ads in the news feed

Facebook is giving advertisers new ways to show off their products, including with augmented reality.

At its F8 developer conference earlier this year, Facebook announced that it was working with businesses to use AR to show off products in Messenger. Now a similar experience will start appearing in the News Feed, with a select group of advertisers testing out AR ads.

Ty Ahmad-Taylor, vice president of product marketing for Facebook’s global marketing solutions, showed off ads that incorporated his face into Candy Crush gameplay footage, and other ads that allowed shoppers to see how virtual sunglasses and makeup would look on their own faces.

“People traditionally have to go into stores to do this,” Ahmad-Taylor said. “People still really love that experience, but they would like to try it at home” — so this “bridges the gap.”

These ads look like normal in-feed ads at first, but they include a “Tap to try it on” option, which opens up the AR capabilities. And of course if you like the way it looks in AR, you can go ahead and buy the product.

Facebook says Michael Kors was the first brand to test out AR ads in the News Feed, with Sephora, NYX Professional Makeup, Bobbi Brown, Pottery Barn, Wayfair and King planning their own tests for later this summer.

Ahmad-Taylor made the announcement this morning at a New York City event for journalists and marketers highlighting Facebook’s advertising plans for the holidays.

In addition, he announced a new Video Creation Kit, which will allow advertisers to incorporate existing images into templates for mobile video ads. According to weight loss company Noom, which has been testing out these tools, the resulting videos performed 77 percent better than the static images.

Lastly, Facebook says it will continue to expand its support for shopping in Instagram Stories. It made shopping tags available to select brands in Stories last month, and for the holidays, it plans to roll that out to all brands that have enabled shopping in Instagram. It’s also making its collections ad format available to all advertisers.


Source: https://techcrunch.com/2018/07/10/facebook-ar-ads/

Writing Content That Is Too In-Depth Is Like Throwing Money Out the Window

[Image: money in the trash]

You’ve heard people telling you that you need to write in-depth content because that’s what Google wants.

And it’s true… the average page that ranks on page 1 of Google contains 1,890 words.

[Image: chart of average word count for page-1 Google results]

But you already know that.

The question is, should you be writing 2,000-word articles? 5,000? Or maybe even go crazy and create ultimate guides that are 30,000 words?

What’s funny is, I have done it all.

I’ve even tested out adding custom images and illustrations to these in-depth articles to see if that helps.

And of course, I tested whether one super long page with tens of thousands of words or multiple pages with 4,000 or 5,000 words each performed better.

So, what do you think? How in-depth should your content be?

Well, let’s first look at my first marketing blog, Quick Sprout.

Short articles don’t rank well

With Quick Sprout, it started off just like any normal blog.

I would write 500 to 1,000-word blog posts and Google loved me.

Just look at my traffic during January 2011.

[Image: Quick Sprout traffic, January 2011]

As you can see, I had a whopping 67,038 unique visitors. That’s not too bad.

Even with the content being short, it did fairly well on Google over the years.

But over time, more marketing blogs started to pop up, competition increased, and I had no choice but to write more detailed content.

I started writing posts that were anywhere from 1,000 to a few thousand words. When I started to do that, I was able to rapidly grow my traffic from 67,038 to 115,759 in one year.

[Image: Quick Sprout traffic, January 2012]

That’s a 72.67% increase in traffic in just 1 year.

It was one of my best years, and all I had to do was write longer content.

So naturally, I kept up with the trend and continually focused on longer content.

But as the competition kept increasing, my traffic started to stagnate, even though I was producing in-depth content.

Here are my traffic stats for November 2012 on Quick Sprout.

[Image: Quick Sprout traffic, November 2012]

I understand that Thanksgiving takes place in November, hence traffic wasn’t as high as it could be. But still, there really wasn’t any growth from January to November of 2012.

In other words, writing in-depth content that was a few thousand words max wasn’t working out.

So what next?

Well, my traffic had plateaued. I had to figure something else out.

Writing longer, more in-depth content had helped me before… so I thought, why not try the 10x formula?

I decided to create content 10 times longer, better, and more in-depth than everyone else. I was going to the extreme because I knew it would reduce the chance of others copying me.

Plus, I was hoping that you would love it as a reader.

So, on January 24, 2013, I released my first in-depth guide.

It was called The Advanced Guide to SEO.

[Image: The Advanced Guide to SEO]

It was so in-depth that it could have been a book.

Literally!

Heck, some say it was even better than a book as I paid someone for custom illustration work.

Now let’s look at the traffic stats for January 2013 when I published the guide.

[Image: Quick Sprout traffic, January 2013]

As you can see my traffic really started to climb again.

I went from 112,681 visitors in November to 244,923 visitors in January. Within 2 months I grew my traffic by 117%.

That’s crazy!!!!

The only difference: I was creating content that was so in-depth that no one else dared to copy me (at that time).

Sure, some tried and a few were able to create some great content, but it wasn’t like hundreds of competing in-depth guides were coming out each year. Not even close!

Now, when I published the guide, I broke it down into multiple chapters like a book, because when I tested making it one long page, it loaded so slowly that the user experience was terrible.

Nonetheless, the strategy was effective.

So what did I do next?

I created 12 in-depth guides

I partnered up with other marketers and created over 280,000 words of marketing content. I picked every major subject… from online marketing to landing pages to growth hacking.

I did whatever I could to generate the most traffic within the digital marketing space.

It took a lot of time and money to create all 12 of these guides, but it was worth it.

By January of 2014, my traffic had reached all-time highs.

[Image: Quick Sprout traffic, January 2014]

I was generating 378,434 visitors a month. That’s a lot for a personal blog on marketing.

Heck, that’s a lot for any blog.

In other words, writing 10x content that was super in-depth worked really well. Even when I stopped producing guides, my traffic continually rose.

Here’s my traffic in January 2015:

[Image: Quick Sprout traffic, January 2015]

And here’s January 2016 for Quick Sprout:

[Image: Quick Sprout traffic, January 2016]

But over time something happened. My traffic didn’t keep growing. And it didn’t stay flat either… it started to drop.

[Image: Quick Sprout traffic, 2017]

In 2017, my traffic dropped for the first time.

It went from 518,068 monthly visitors to 451,485. It wasn’t a huge drop, but it was a drop.

And in 2018 my traffic dropped even more:

[Image: Quick Sprout traffic, 2018]

I saw a huge drop in 2018. Traffic went down to just 297,251 monthly visitors.

And sure, part of that is because I shifted my focus to NeilPatel.com, which has become the main place I blog now.

But it’s largely that I learned something new when building up NeilPatel.com.

Longer isn’t always better

Similar to Quick Sprout, I have in-depth guides on NeilPatel.com.

I have guides on online marketing, SEO, Google ads, Facebook ads, and the list goes on and on.

If you happened to click on any of the guides above, you'll notice that they are drastically different from the ones on Quick Sprout.

Here are the main differences:

  • No fancy design – I found with Quick Sprout that people love the fancy designs, but over time content gets old and outdated. Updating content when there are so many custom illustrations is tough, which means you probably won't update it as often as you should. This causes traffic to go down over time because people want to read up-to-date and relevant information.
  • Shorter and to the point – I’ve found that you don’t need super in-depth content. The guides on NeilPatel.com rank in similar positions on Google and cap out at around 10,000 words. They are still in-depth, but I found that after 10,000 or so words there are diminishing returns.

Now let’s look at the stats.

Here’s the traffic to the advanced SEO guide on Quick Sprout over the last 30 days:

[Image: Quick Sprout Advanced Guide to SEO traffic, last 30 days]

Over 7,842 unique pageviews. There are tons of chapters and as you can see people are going through all of them.

And now let’s look at the NeilPatel.com SEO guide:

[Image: NeilPatel.com SEO guide traffic]

I spent a lot less time, energy, and money creating the guide on NeilPatel.com, yet it receives 17,442 unique pageviews per month, which is more than the Quick Sprout guide. That’s a 122% difference!

But how is that possible?

I know what you are thinking. Google wants people to create higher quality content that benefits people.

So how is it that the NeilPatel.com one ranks higher?

Is it because of backlinks?

Well, the guide on Quick Sprout has 850 referring domains:

[Image: referring domains for the Quick Sprout guide]

And the NeilPatel.com guide has 831 referring domains:

[Image: referring domains for the NeilPatel.com guide]

Plus, they have similar URL ratings and domain ratings according to Ahrefs so that can’t be it.

So, what gives?

Google is a machine. It doesn't think with emotions; it uses logic. While we as users look at the guide on Quick Sprout and think that it looks better and is more in-depth, Google focuses on the facts.

See, Google doesn’t determine if one article is better than another by asking people for their opinion. Instead, they look at the data.

For example, they can look at the following metrics:

  • Time on site – which content piece has a better time on site?
  • Bounce rate – which content piece has the lowest bounce rate?
  • Back button – does the article solve all of the visitors' questions and concerns? So much so that the visitor doesn't have to hit the back button and go back to Google to find another web page?

And those are just a few of the things Google looks at among its 200+ ranking factors.
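Google doesn't publish these signals, but the intuition behind them is easy to sketch. Here's a toy calculation over hypothetical session records (the data and field layout are invented for illustration; this is not anything Google actually exposes):

```python
# Hypothetical sessions for one article: (seconds on page, bounced back to Google?)
sessions = [
    (240, False),
    (15, True),
    (180, False),
    (30, True),
    (300, False),
]

# Average time on page: a proxy for "time on site"
avg_time_on_page = sum(seconds for seconds, _ in sessions) / len(sessions)

# Share of visitors who hit the back button and returned to the results page
back_to_serp_rate = sum(1 for _, bounced in sessions if bounced) / len(sessions)

print(avg_time_on_page)   # 153.0
print(back_to_serp_rate)  # 0.4
```

The lower that back-to-results rate, the more completely the article answered the searcher's question.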

Because of this, I took a different approach to NeilPatel.com, which is why my traffic has continually gone up over time.

Instead of using opinion and spending tons of energy creating content that I think is amazing, I decided to let Google guide me.

With NeilPatel.com, my articles range from 2,000 to 3,000 words. I’ve tried articles with 5,000+ words, but there is no guarantee that the more in-depth content will generate more traffic or that users will love it.

Now to clarify, I’m not trying to be lazy.

Instead, I’m trying to create amazing content while being short and to the point. I want to be efficient with both my time and your time while still delivering immense value.

Here’s the process I use to ensure I am not writing tons of content that people don’t want to read.

Be data driven

Because there is no guarantee that an article or blog post will do well, I focus on writing amazing content that is 2,000 to 3,000 words long.

I stick within that range because it is short enough that you will read it and long enough that I can go in-depth enough to provide value.

Once I release a handful of articles, I then look to see which ones you prefer based on social shares and search traffic.

Now that I have a list of articles that are doing somewhat well, I log into Google Search Console and find those URLs.

You can find a list of URLs within Google Search Console by clicking on “Search Traffic” and then “Search Analytics”.

You’ll see a screen load that looks something like this:

[Image: Google Search Console Search Analytics report]

From there you’ll want to click on the “pages” button. You should be looking at a screen that looks similar to this:

[Image: Google Search Console pages report]

Find the pages that are gaining traction based on total search traffic and social shares and then click on them (you can input URLs into Shared Count to find out social sharing data).

Once you click on the URL, you’ll want to select the “Queries” icon to see which search terms people are finding that article from.

[Image: search queries for a single page in Google Search Console]

Now go back to your article and make it more in-depth.

And when I say in-depth, I am not talking about word count like I used to focus on at Quick Sprout.

Instead, I am talking about depth… did the article cover everything that the user was looking for?

If you can cover everything in 3,000 words then you are good. If not, you’ll have to make it longer.

The way you do this is by seeing which search queries people are using to find your articles (like in the screenshot above). Keep in mind that people aren’t searching Google in a deliberate effort to land on your site… people use Google because they are looking for a solution to their problem.

Think of those queries that Google Search Console is showing you as “questions” people have.

If your article is in-depth enough to answer all of those questions, then you have done a good job.
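To make that concrete, here's a naive sketch that treats each Search Console query as a question and flags the ones an article's text doesn't touch at all. The article text and queries here are made up, and a real coverage check would need something smarter than word overlap:

```python
def uncovered_queries(article_text: str, queries: list[str]) -> list[str]:
    """Return the queries whose words don't all appear in the article."""
    words = set(article_text.lower().split())
    return [q for q in queries if not set(q.lower().split()) <= words]

article = "a beginner guide to seo covering keywords links and content"
queries = ["seo keywords", "seo for local business", "link building content"]

print(uncovered_queries(article, queries))
# ['seo for local business', 'link building content']
```

Any query that comes back uncovered is a candidate topic to add depth on, which grows the article without padding it.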

If not, you’ll have to go more in-depth.

In essence, you are adding more words to your article, but you aren’t adding fluff.

You’re not keyword stuffing either. You are simply making sure to cover all aspects of the subject within your article.

This is how you write in-depth articles and not waste your time (or money) on word count.

And that’s how I grew NeilPatel.com without writing too many unnecessary words.

Conclusion

If you are writing 10,000-word articles you are wasting your time. Heck, even articles over 5,000 words could be wasting your time if you are only going after as many words as possible and adding tons of fluff along the way.

You don’t know what people want to read. You’re just taking a guess.

The best approach is to write content that is amazing and within the 2,000- to 3,000-word range.

Once you publish the content, give it a few months and then look at search traffic as well as social sharing data to see what people love.

Take those articles and invest more resources into making them better and ultimately more in-depth (in terms of quality and information, not word count).

The last thing you want to do is write in-depth articles on subjects that very few people care about.

Just look at the Advanced Guide to SEO on Quick Sprout… I made an obvious mistake. I made it super in-depth on “advanced SEO”. But when you search Google for the term “SEO” and you scroll to the bottom to see related queries you see this…

[Image: Google related searches for "SEO"]

People are looking for the basics of SEO, not advanced SEO information.

If I wrote a 2,000-word blog post instead of a 20,000-word guide, I could have caught this early on and adapted the article more to what people want versus what I thought they wanted.

That’s a major difference.

So how in-depth are you going to make your content?



Source: https://neilpatel.com/blog/in-depth-content-tips/