Three-quarters of Americans lack confidence in tech companies’ ability to fight election interference

A significant majority of Americans have lost faith in tech companies’ ability to prevent the misuse of their platforms to influence the 2020 presidential election, according to a new study from Pew Research Center, released today. The study found that nearly three-quarters of Americans (74%) don’t believe platforms like Facebook, Twitter and Google will be able to prevent election interference. What’s more, this sentiment is shared nearly evenly across both political parties.

Pew says that nearly identical shares of Republicans and Republican-leaning independents (76%) and Democrats and Democratic-leaning independents (74%) have little or no confidence in technology companies’ ability to prevent their platforms’ misuse with regard to election interference.

And yet, 78% of Americans believe it’s tech companies’ job to do so. Slightly more Democrats (81%) took this position, compared with Republicans (75%).

While Americans had similar negative feelings about platforms’ misuse ahead of the 2018 midterm elections, their lack of confidence has gotten even worse over the past year. As of January 2020, 74% of Americans report having little confidence in the tech companies, compared with 66% back in September 2018. For Democrats, the decline in trust is even greater, with 74% today feeling “not too” confident or “not at all” confident, compared with 62% in September 2018. Republican sentiment has declined somewhat during this same time, as well, with 72% expressing a lack of confidence in 2018, compared with 76% today.

Even among those who believe the tech companies are capable of handling election interference, very few Americans (5%) feel “very” confident in their capabilities. Most of the optimists see the challenge as difficult and complex, with 20% saying they feel only “somewhat” confident.

Across age groups, both the lack of confidence in tech companies and a desire for accountability increase with age. For example, 31% of those 18 to 29 feel at least somewhat confident in tech companies’ abilities, versus just 20% of those 65 and older. Similarly, 74% of the youngest adults believe the companies should be responsible for platform misuse, compared with 88% of the 65-and-up crowd.

Given the increased negativity felt across the board on both sides of the aisle, it would have been interesting to see Pew update its 2018 survey that looked at other areas of concern Republicans and Democrats had with tech platforms. The older study found that Republicans were more likely to feel social media platforms favored liberal views while Democrats were more heavily in favor of regulation and restricting false information.

Issues around election interference aren’t just limited to the U.S., of course. But news of Russia’s meddling in U.S. politics in particular — which involved every major social media platform — has helped to shape Americans’ poor opinion of tech companies and their ability to prevent misuse. The problem continues today, as Russia is being called out again for trying to intervene in the 2020 elections, according to several reports. At present, Russia’s focus is on aiding Sen. Bernie Sanders’ campaign in order to interfere with the Democratic primary, the reports said.

Meanwhile, many of the same vulnerabilities that Russia exploited during the 2016 elections remain, including the platforms’ ability to quickly spread fake news. Russia is also working around blocks the tech companies have erected in an attempt to keep Russian meddling at bay. One report from The New York Times said Russian hackers and trolls were now better at covering their tracks and were even paying Americans to set up Facebook pages to get around Facebook’s ban on foreigners buying political ads.

Pew’s report doesn’t get into any details as to why Americans have lost so much trust in tech companies since the last election, but it’s likely more than just the fallout from election interference alone. Five years ago, tech companies were viewed largely as having a positive impact on the U.S., Pew had once reported. But Americans no longer feel that way, and now only around half of U.S. adults believe the companies are having a positive impact.

More Americans are becoming aware of how easily these massive platforms can be exploited and how serious the ramifications of those exploits have become across a number of areas, including personal privacy. It’s not surprising then that user sentiment around how well tech companies are capable of preventing election interference has declined, too, along with all the rest.


Source: https://techcrunch.com/2020/02/25/three-quarters-of-americans-lack-confidence-in-tech-companies-ability-to-fight-election-interference/

Facebook’s latest ‘transparency’ tool doesn’t offer much — so we went digging

Just under a month ago Facebook switched on global availability of a tool which affords users a glimpse into the murky world of tracking that its business relies upon to profile users of the wider web for ad targeting purposes.

Facebook is not going boldly into transparent daylight — but rather offering what privacy rights advocacy group Privacy International has dubbed “a tiny sticking plaster on a much wider problem”.

The problem it’s referring to is the lack of active and informed consent for mass surveillance of Internet users via background tracking technologies embedded into apps and websites, including as people browse outside Facebook’s own content garden.

The dominant social platform is also only offering this feature in the wake of the 2018 Cambridge Analytica data misuse scandal, when Mark Zuckerberg faced awkward questions in Congress about the extent of Facebook’s general web tracking. Since then policymakers around the world have dialled up scrutiny of how its business operates — and realized there’s a troubling lack of transparency in and around adtech generally and Facebook specifically.

Facebook’s tracking pixels and social plugins — aka the share/like buttons that pepper the mainstream web — have created a vast tracking infrastructure which silently informs the tech giant of Internet users’ activity, even when a person hasn’t interacted with any Facebook-branded buttons.
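To make that mechanism concrete, here is a minimal Python sketch of the kind of HTTP request a tracking pixel causes a browser to fire when a page loads. The `facebook.com/tr` endpoint is the pixel’s well-known destination, but the pixel ID, parameter set and site URL below are assumptions for illustration, not a faithful copy of Facebook’s implementation:

```python
# Illustrative sketch only: roughly the request a tracking pixel triggers
# when a page embedding it loads. The pixel ID and site URL are made up.
from urllib.parse import urlencode
from urllib.request import Request

params = urlencode({
    "id": "1234567890",   # hypothetical pixel ID assigned to the site owner
    "ev": "PageView",     # the event being reported back to the tracker
    "dl": "https://example-news-site.com/article",  # the page being read
})

# The browser would also attach any Facebook cookies it holds, which is what
# lets off-site activity be linked back to a profile.
req = Request(
    f"https://www.facebook.com/tr?{params}",
    headers={"Referer": "https://example-news-site.com/article"},
)
```

The point is that the request goes to the tracker’s own domain with the visited page attached, so the tracker learns what you read without you ever clicking a Facebook button.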

Facebook claims this is just ‘how the web works’. And other tech giants are similarly engaged in tracking Internet users (notably Google). But as a platform with 2.2BN+ users Facebook has stolen a march on the lion’s share of rivals when it comes to harvesting people’s data and building out a global database of person profiles.

It’s also positioned as a dominant player in the adtech ecosystem, which means it’s the one being fed intel by data brokers and publishers who deploy tracking tech to try to survive in such a skewed system.

Meanwhile the opacity of online tracking means the average Internet user is none the wiser that Facebook can be following what they’re browsing all over the Internet. Questions of consent loom very large indeed.

Facebook is also able to track people’s usage of third-party apps if a person chooses a Facebook login option, which the company encourages developers to implement in their apps — again, the carrot being the ability to offer a lower-friction choice vs requiring users to create yet another login credential.

The price for this ‘convenience’ is data and user privacy, as the Facebook login gives the tech giant a window into third-party app usage.

The company has also used a VPN app it bought and badged as a security tool to glean data on third party app usage — though it’s recently stepped back from the Onavo app after a public backlash (though that did not stop it running a similar tracking program targeted at teens).

Background tracking is how Facebook’s creepy ads function (it prefers to call such behaviorally targeted ads ‘relevant’) — and how they have functioned for years.

Yet it’s only in recent months that it’s offered users a glimpse into this network of online informers — by providing limited information about the entities that are passing tracking data to Facebook, as well as some limited controls.

From ‘Clear History’ to ‘Off-Facebook Activity’

Originally briefed in May 2018, at the height of the Cambridge Analytica scandal, as a ‘Clear History’ option, this has since been renamed ‘Off-Facebook Activity’ — a label so bloodless and devoid of any call to action that the average Facebook user, should they stumble upon it buried deep in unlovely settings menus, would more likely move along than feel moved to carry out a privacy purge.

(For the record you can access the setting here — but you do need to be logged into Facebook to do so.)

The other problem is that Facebook’s tool doesn’t actually let you purge your browsing history; it just delinks it from being associated with your Facebook ID. There is no option to actually clear your browsing history via its button, which is likely another reason for the name switch. So, no, Facebook hasn’t built a clear history ‘button’.

“While we welcome the effort to offer more transparency to users by showing the companies from which Facebook is receiving personal data, the tool offers little way for users to take any action,” said Privacy International this week, criticizing Facebook for “not telling you everything”.

As the saying goes, a little knowledge can be a dangerous thing. So a little transparency implies — well — anything but clarity. And Privacy International sums up the Off-Facebook Activity tool with an apt oxymoron — describing it as “a new window to the opacity”.

“This tool illustrates just how impossible it is for users to prevent external data from being shared with Facebook,” it writes, warning with emphasis: “Without meaningful information about what data is collected and shared, and what are the ways for the user to opt-out from such collection, Off-Facebook activity is just another incomplete glimpse into Facebook’s opaque practices when it comes to tracking users and consolidating their profiles.”

It points out, for instance, that the information provided here is limited to a “simple name” — thereby preventing the user from “exercising their right to seek more information about how this data was collected”, which EU users at least are entitled to.

“As users we are entitled to know the name/contact details of companies that claim to have interacted with us. If the only thing we see, for example, is the random name of an artist we’ve never heard before (true story), how are we supposed to know whether it is their record label, agent, marketing company or even them personally targeting us with ads?” it adds.

Another criticism is that Facebook is only providing limited information about each data transfer — with Privacy International noting some events are marked under a cryptic “CUSTOM” label; and that Facebook provides “no information regarding how the data was collected by the advertiser (Facebook SDK, tracking pixel, like button…) and on what device, leaving users in the dark regarding the circumstances under which this data collection took place”.

“Does Facebook really display everything they process/store about those events in the log/export?” queries privacy researcher Wolfie Christl, who tracks the adtech industry’s tracking techniques. “They have to, because otherwise they don’t fulfil their SAR [Subject Access Request] obligations [under EU law].”

Christl notes Facebook makes users jump through an additional “download” hoop in order to view data on tracked events — and even then, as Privacy International points out, it gives up only a limited view of what has actually been tracked…


“For example, why doesn’t Facebook list the specific sites/URLs visited? Do they infer data from the domains e.g. categories? If yes, why is this not in the logs?” Christl asks.

We reached out to Facebook with a number of questions, including why it doesn’t provide more detail by default. It responded with this statement attributed to a spokesperson:

We offer a variety of tools to help people access their Facebook information, and we’ve designed these tools to comply with relevant laws, including GDPR. We disagree with this [Privacy International] article’s claims and would welcome the chance to discuss them with Privacy International.

Facebook also said it’s continuing to develop which information it surfaces through the Off-Facebook Activity tool — and said it welcomes feedback on this.

We also asked it about the legal bases it uses to process people’s information that’s been obtained via its tracking pixels and social plug-ins. It did not provide a response to those questions.

Six names, many questions…

When the company launched the Off-Facebook Activity tool, a snap poll of available TechCrunch colleagues showed very diverse results for our respective tallies (which also may not show the most recent activity, per other Facebook caveats) — ranging from one colleague who had an eye-watering 1,117 entities (likely down to doing a lot of app testing); to several with a few hundred apiece; to a couple in the mid tens.

In my case I had just six. But from my point of view — as an EU citizen with a suite of rights related to privacy and data protection; and as someone who aims to practice good online privacy hygiene, including having a very locked down approach to using Facebook (never using its mobile app for instance) — it was still six too many. I wanted to find out how these entities had circumvented my attempts not to be tracked.

And in the case of the first one in the list, who on earth it was…

Turns out the cloudfront domain is a subdomain of CloudFront, Amazon Web Services’ content delivery network. But I had to go searching online myself to figure out that the owner of that particular domain is (now) a company called Nativo.

Facebook’s list provided only very bare bones information. I also clicked to delink the first entity, since it immediately looked so weird, and found that by doing that Facebook wiped all the entries — which meant I was unable to retain access to what little additional info it had provided about the respective data transfers.

Undeterred I set out to contact each of the six companies directly with questions — asking what data of mine they had transferred to Facebook and what legal basis they thought they had for processing my information.

(On a practical level six names looked like a sample size I could at least try to follow up manually — but remember I was the TechCrunch exception; imagine trying to request data from 1,117 companies, or 450, or even 57, which were the lengths of some of my colleagues’ lists.)

This process took about a month and a lot of back and forth/chasing up. It likely only yielded as much info as it did because I was asking as a journalist; an average Internet user may have had a tougher time getting attention on their questions — though, under EU law, citizens have a right to request a copy of personal data held on them.

Eventually, I was able to obtain confirmation that tracking pixels and Facebook share buttons had been involved in my data being passed to Facebook in certain instances. Even so I remain in the dark on many things. Such as exactly what personal data Facebook received.

In one case I was told by a listed company that it doesn’t know itself what data was shared — only Facebook knows because it’s implemented the company’s “proprietary code”. (Insert your own ‘WTAF’ there.)

The legal side of these transfers also remains highly opaque. From my point of view I would not intentionally consent to any of this tracking — but in some instances the entities involved claim that (my) consent was (somehow) obtained (or implied).

In other cases they said they are relying on a legal basis in EU law that’s referred to as ‘legitimate interests’. However this requires a balancing test to be carried out to ensure a business use does not have a disproportionate impact on individual rights.

I wasn’t able to ascertain whether such tests had ever been carried out.

Meanwhile, since Facebook is also making use of the tracking information from its pixels and social plug-ins (and seemingly in a more granular way, since some entities claimed they only get aggregate, not individual, data), Christl suggests it’s unlikely such a balancing test would be easy to pass, for that tiny little ‘platform giant’ reason.

Notably he points out Facebook’s Business Tool terms state that it makes use of so called “event data” to “personalize features and content and to improve and secure the Facebook products” — including for “ads and recommendations”; for R&D purposes; and “to maintain the integrity of and to improve the Facebook Company Products”.

In a section of its legal terms covering the use of its pixels and SDKs Facebook also puts the onus on the entities implementing its tracking technologies to gain consent from users prior to doing so in relevant jurisdictions that “require informed consent” for tracking cookies and similar — giving the example of the EU.

“You must ensure, in a verifiable manner, that an end user provides the necessary consent before you use Facebook Business Tools to enable us to store and access cookies or other information on the end user’s device,” Facebook writes, pointing users of its tools to its Cookie Consent Guide for Sites and Apps for “suggestions on implementing consent mechanisms”.
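As a rough illustration of the gate those terms describe (the function name and markup below are hypothetical, not Facebook code or its Cookie Consent Guide), a site could withhold the pixel tag until consent is actually recorded:

```python
# Hypothetical sketch of a consent gate: the site emits the tracking pixel's
# markup only after the end user has verifiably opted in, so no tracking
# request is ever made before consent.
def tracking_snippet(consented: bool, pixel_id: str) -> str:
    """Return the pixel tag only when consent was recorded; empty otherwise."""
    if not consented:
        return ""  # no markup emitted -> the browser never contacts the tracker
    return (f'<img height="1" width="1" style="display:none" '
            f'src="https://www.facebook.com/tr?id={pixel_id}&ev=PageView"/>')
```

The design point is that the gate has to sit in front of the tag itself: if the pixel markup reaches the browser at all, the request fires and the data transfer has already happened, consent banner or not.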

Christl flags the contradiction between Facebook’s claim that users of its tracking tech need to gain prior consent and the claims I was given by some of these entities that they don’t, because they’re relying on ‘legitimate interests’.

“Using LI as a legal basis is even controversial if you use a data analytics company that reliably processes personal data strictly on behalf of you,” he argues. “I guess, industry lawyers try to argue for a broader applicability of LI, but in the case of FB business tools I don’t believe that the balancing test (a businesses legitimate interests vs. the impact on the rights and freedoms of data subjects) will work in favor of LI.”

Those entities relying on legitimate interests as a legal base for tracking would still need to offer a mechanism where users can object to the processing — and I couldn’t immediately see such a mechanism in the cases in question.

One thing is crystal clear: Facebook itself does not provide a mechanism for users to object to its processing of tracking data nor opt out of targeted ads. That remains a long-standing complaint against its business in the EU which data protection regulators are still investigating.

One more thing: Non-Facebook users continue to have no way of learning what data of theirs is being tracked and transferred to Facebook. Only Facebook users have access to the Off-Facebook Activity tool, for example. Non-users can’t even access a list.

Facebook has defended its practice of tracking non-users around the Internet as necessary for unspecified ‘security purposes’. It’s an inherently disproportionate argument of course. The practice also remains under legal challenge in the EU.

Tracking the trackers

SimpleReach (aka d8rk54i4mohrb.cloudfront.net)

What is it? A California-based analytics platform (now owned by Nativo) used by publishers and content marketers to measure how well their content/native ads perform on social media. The product began life in the early noughties as a simple tool for publishers to recommend similar content at the bottom of articles before the startup pivoted — aiming to become ‘the PageRank of social’ — offering analytics tools for publishers to track engagement around content in real-time across the social web (plugging into platform APIs). It also built statistical models to predict which pieces of content will be the most social and where, generating a proprietary per article score. SimpleReach was acquired by Nativo last year to complement analytics tools the latter already offered for tracking content on the publisher/brand’s own site.

Why did it appear in your Off-Facebook Activity list? Given it’s a B2B product, it does not have a visible consumer brand of its own. And, to my knowledge, I have never visited its own website prior to investigating why it appeared in my Off-Facebook Activity list. Clearly, though, I must have visited a site (or sites) using its tracking/analytics tools. Of course an Internet user has no obvious way to know this — unless they’re actively using tools to monitor which trackers are tracking them.

In a further quirk, neither the SimpleReach (nor Nativo) brand names appeared in my Off-Facebook Activity list. Rather a domain name was listed — d8rk54i4mohrb.cloudfront.net — which looked at first glance weird/alarming.

Using a tracker analytics service, I found that this domain is owned by SimpleReach.

Once I knew the name I was able to connect the entry to Nativo — via news reports of the acquisition — which led me to an entity I could direct questions to.  

What happened when you asked them about this? There was a bit of back and forth and then they sent a detailed response.
Source: https://techcrunch.com/2020/02/25/facebooks-latest-transparency-tool-doesnt-offer-much-so-we-went-digging/

Games already are social networks

Video games are only getting more popular.

Roughly 2.5 billion people around the world played games last year, double the number of players in 2013. Gaming is a $149 billion industry, growing 7% year over year, with the U.S. as its largest market. In America, the average gamer is 33 years old and 46% of gamers are female, according to the Entertainment Software Association.

Per Quartz reporter Dan Kopf’s summary of U.S. Department of Labor data:

More people now report playing games on a typical day — 11.4% in 2017 compared to 7.8% in 2003 — and, on days they do play games, they spend more time doing so — about 145 minutes in 2017, compared to 125 in 2003.

Young people are the biggest driver of the trend. From 2003 to 2015, 15-24 year olds spent less than 25 minutes playing games on the average day. From 2015 to 2017, those in that age group dedicated almost 40 minutes a day to games.

Mobile games account for a large part of this dramatic growth, but all major game categories are growing. The console gaming market — the oldest segment and most expensive due to hardware cost — expanded more than 7% last year alone.


Source: https://techcrunch.com/2020/02/25/games-already-are-social-networks/

A multiverse, not the metaverse

Following web forums, web platforms and mobile apps, we are entering a new stage of social media — the multiverse era — where the virtual worlds of games expand to become mainstream hubs for social interaction and entertainment. In a seven-part Extra Crunch series, we will explore why that is the case and which challenges and opportunities are making it happen.

In 10 years, we will have undergone a paradigm shift in social media and human-computer interaction, moving away from 2D apps centered on posting content toward shared feeds and an era where mixed reality (viewed with lightweight headsets) mixes virtual and physical worlds. But we’re not technologically or culturally ready for that future yet. The “metaverse” of science fiction is not arriving imminently.

Instead, the virtual worlds of multiplayer games — still accessed from phones, tablets, PCs and consoles — are our stepping stones during this next phase.

Understanding this gradual transition helps us reconcile the futuristic visions of many in tech with the reality of how most humans will participate in virtual worlds and how social media impacts society. This transition centers on the merging of gaming and social media and leads to a new model of virtual worlds that are directly connected with our physical world, instead of isolated from it.

Multiverse virtual worlds will come to function almost like new countries in our society, countries that exist in cyberspace rather than physical locations but have complex economic and political systems that interact with the physical world.

Throughout these posts, I make a distinction between the “physical,” “virtual,” and “real” worlds. Our physical world defines tangible existence like in-person interactions and geographic location. The virtual world is that of digital technology and cyberspace: websites, social media, games. The real world is defined by the norms of what we accept as normal and meaningful in society. Laws and finance aren’t physical, but they are universally accepted as concrete aspects of life. I’ll argue here that social media apps are virtual worlds we have accepted as real — unified with normal life rather than separate from it — and that multiverse virtual worlds will make the same crossover.

In fact, because they incentivize small group interactions and accomplishment of collaborative tasks rather than promotion of viral posts, multiverse virtual worlds will bring a healthier era for social media’s societal impact.

The popularity of massive multiplayer online (MMO) gaming is exploding at the same time that the technology to access persistent virtual worlds with high-quality graphics from nearly any device is hitting the market. The rise of Epic Games’ Fortnite since 2017 accelerated interest in MMO games from both consumers who don’t consider themselves gamers and from journalists and investors who hadn’t paid much attention to gaming before.

In the decade ahead, people will come to socialize as much in virtual worlds that evolved from games as they will on platforms like Instagram, Twitter and TikTok. Building things with friends within virtual worlds will become common, and major events within the most popular virtual worlds will become pop culture news stories.

Right now, three-quarters of U.S.-based Facebook users interact with the site on a daily basis; Instagram (63%), Snapchat (61%), YouTube (51%) and Twitter (41%) have similarly penetrated the daily lives of Americans. By comparison, the percentage of people who play a game on any given day increased from just 8% in 2003 to 11% in 2016. Within the next few years, that number will multiply as the virtual worlds within games become more fulfilling social, entertainment and commercial platforms.

As I mentioned in my 2020 media predictions article, Facebook is readying itself for this future and VCs are funding numerous startups that are building toward it, like Klang Games, Darewise Entertainment and Singularity 6. Epic Games joins Roblox and Mojang (the company behind Minecraft) as among the best-positioned large gaming companies to seize this opportunity. Startups are already popping up to provide the middleware for virtual economies as they become larger and more complex, and a more intense wave of such startups will arrive over the next few years to provide that infrastructure as a service.

Over the next few years, a trend will emerge: new open-world MMO games that emphasize social functionality to engage users, even if those users don’t care much about the mission of the game itself. These new products will target casual gamers who want to enter the world for merely a few minutes at a time, since hardcore gamers are already well-served by game publishers.

Some of these more casual, socializing-oriented MMOs will gain widespread popularity, the economy within and around them will soar and the original gaming scenario that provided a focus on what to do will diminish as content created by users becomes the main attraction.

Let’s explore the forces that underpin this transition. Here are the seven articles in this series:

  1. Games already are social networks
  2. Social apps already are lightweight virtual worlds
  3. What virtual worlds in this transition era look like
  4. Why didn’t this already happen?
  5. How virtual worlds could save society
  6. The rise of virtual economies and their merging with our “real” economy
  7. Competitive landscape of the multiverse


Source: https://techcrunch.com/2020/02/25/virtual-worlds-intro/

A Step-by-Step Guide to Growing Your SEO Traffic Using Ubersuggest

There are a lot of tools out there and a ton of SEO reports.

But when you use them, what happens?

You get lost, right?

Don’t worry, that’s normal (sadly). And maybe one day I will be able to fix that.

But for now, the next best thing I can do is teach you how to grow your SEO traffic using Ubersuggest. This way, you know exactly what to do, even if you have never done any SEO.

Here we go…

Step #1: Create a project

Head over to the Ubersuggest dashboard and register for a free account.

Once you do that, I want you to click on “Add Your First Project.”

Next, add your URL and the name of your website.

Then pick the main country or city that you do business in. If you are a national business, then type in the country you are in. If you are a local business, type in your city and click “Next.”

If you do business in multiple countries or cities, you can type them in one at a time and select each country or city.

Assuming you have your site connected to Google Search Console, you’ll see a list of keywords that you can automatically track on the left-hand side. Aside from tracking any of those, you can track others as well. Just type in the keywords you want to track in the box and hit the “Enter” key.

After hitting the “Next” button, you will be taken to your dashboard. It may take a minute but your dashboard will look something like this:

Click on the “Tracked Keywords” box and load your website profile.

What’s cool about this report is that you can see your rankings over time both on mobile and desktop devices. This is important because Google has a mobile index, which means your rankings are probably slightly different on mobile devices than desktop.

If you want to see how you are ranking on Google’s mobile index, you just have to click the “Mobile” icon.

The report is self-explanatory. It shows your rankings over time for any keyword you are tracking. You can always add more keywords and even switch between locations.

For example, as of writing this blog post, I rank number 4 on desktop devices for the term “SEO” in the United States. In the United Kingdom, though, I rank number 16. Looks like I need to work on that.
Source: https://neilpatel.com/blog/ubersuggest-guide/

Are H1 Tags Necessary for Ranking? [SEO Experiment]

Posted by Cyrus-Shepard

In the earlier days of search marketing, SEOs often heard the same two best practices repeated so many times that they became implanted in our brains:

  1. Wrap the title of your page in H1 tags
  2. Use one — and only one — H1 tag per page

These suggestions appeared in audits and SEO tools, and were the source of constant head shaking. Conversations would go like this:

“Silly CNN. The headline on that page is an H2. That’s not right!”

“Sure, but is it hurting them?”

“No idea, actually.”

Over time, SEOs started to abandon these ideas, and the strict concept of using a single H1 was replaced by “large text near the top of the page.”

Google grew better at content analysis and understanding how the pieces of the page fit together. Given how often publishers make mistakes with HTML markup, it makes sense that Google would try to figure it out for itself.

The question comes up so often that Google’s John Mueller addressed it in a Webmaster Hangout:

    “You can use H1 tags as often as you want on a page. There’s no limit — neither upper nor lower bound.

    H1 elements are a great way to give more structure to a page so that users and search engines can understand which parts of a page are kind of under different headings, so I would use them in the proper way on a page.

    And especially with HTML5, having multiple H1 elements on a page is completely normal and kind of expected. So it’s not something that you need to worry about. And some SEO tools flag this as an issue and say like ‘oh you don’t have any H1 tag’ or ‘you have two H1 tags.’ From our point of view, that’s not a critical issue. From a usability point of view, maybe it makes sense to improve that. So, it’s not that I would completely ignore those suggestions, but I wouldn’t see it as a critical issue.

    Your site can do perfectly fine with no H1 tags or with five H1 tags.”

    Despite these assertions from one of Google’s most trusted authorities, many SEOs remained skeptical, wanting to “trust but verify” instead.

    So of course, we decided to test it… with science!

    Craig Bradford of Distilled noticed that the Moz Blog — this very one — used H2s for headlines instead of H1s (a quirk of our CMS).


    We devised a 50/50 split test of our titles using the newly branded SearchPilot (formerly DistilledODN). Half of our blog titles would be changed to H1s, and half kept as H2. We would then measure any difference in organic traffic between the two groups.

    After eight weeks, the results were in:

    To the uninitiated, these charts can be a little hard to decipher. Rida Abidi of Distilled broke down the data for us like this:

    Change breakdown – inconclusive

    • Predicted uplift: 6.2% (est. 6,200 monthly organic sessions)
    • We are 95% confident that the monthly increase in organic sessions is between:
      • Top: 13,800
      • Bottom: -4,100

    The results of this test were inconclusive in terms of organic traffic, therefore we recommend rolling it back.
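For the statistically curious, the “inconclusive” verdict comes down to a simple rule: if the 95% confidence interval for the uplift contains zero, you can’t rule out “no effect at all.” Here’s a minimal Python sketch of that check, using the numbers reported above:

```python
# Minimal sketch of the "inconclusive" logic: an effect is only
# statistically significant (at the chosen confidence level) if the
# whole confidence interval sits on one side of zero.

def is_significant(ci_low: float, ci_high: float) -> bool:
    """True only if the interval excludes zero."""
    return ci_low > 0 or ci_high < 0

predicted_uplift = 6_200      # estimated extra monthly organic sessions
ci = (-4_100, 13_800)         # reported 95% confidence interval

print(is_significant(*ci))    # False -> inconclusive, roll it back
```

Since the interval runs from -4,100 all the way up to 13,800, it straddles zero, so the test can’t distinguish the change from random noise.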

    Result: Changing our H2s to H1s made no statistically significant difference

    Confirming their statements, Google’s algorithms didn’t seem to care if we used H1s or H2s for our titles. Presumably, we’d see the same result if we used H3s, H4s, or no heading tags at all.

    It should be noted that our titles still:

    • Used a large font
    • Sat at the top of each article
    • Were unambiguous and likely easy for Google to figure out

    Does this settle the debate? Should SEOs throw caution to the wind and throw away all those H1 recommendations?

    No, not completely…

    Why you should still use H1s

    Despite the fact that Google seems to be able to figure out the vast majority of titles one way or another, there are several good reasons to keep using H1s as an SEO best practice.

    Georgy Nguyen made some excellent points in an article over at Search Engine Land, which I’ll try to summarize and add to here.

    1. H1s help accessibility

    Screen reading technology can use H1s to help users navigate your content, both when reading a page aloud and when jumping between its sections.

    2. Google may use H1s in place of title tags

    In some rare instances — such as when Google can’t find or process your title tag — they may choose to extract a title from some other element of your page. Oftentimes, this can be an H1.

    3. Heading use is correlated with higher rankings

    Nearly every SEO correlation study we’ve ever seen has shown a small but positive correlation between higher rankings and the use of headings on a page, such as this most recent one from SEMrush, which looked at H2s and H3s.

    To be clear, there’s no evidence that headings in and of themselves are a Google ranking factor. But headings, like Structured Data, can provide context and meaning to a page.

    John Mueller made a similar point on Twitter.

    What’s it all mean? While it’s a good idea to keep adhering to H1 “best practices” for a number of reasons, Google will more than likely figure things out — as our experiment showed — if you fail to follow strict H1 guidelines.

    Regardless, you should likely:

    1. Organize your content with hierarchical headings — ideally H1, H2s, H3s, etc.
    2. Use a large font headline at the top of your content. In other words, make it easy for Google, screen readers, and other machines or people reading your content to figure out the headline.
    3. If you have a CMS or technical limitations that prevent you from using strict H1s and SEO best practices, do your best and don’t sweat the small stuff.
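As an aside, the heading outline a crawler sees can be recovered with a few lines of code. Here’s a rough Python sketch using only the standard library (the sample HTML is invented for illustration):

```python
# Sketch: recover a page's heading outline (h1-h6) with the stdlib parser.
from html.parser import HTMLParser

class HeadingOutline(HTMLParser):
    def __init__(self):
        super().__init__()
        self._current = None
        self.outline = []          # list of (level, text) tuples

    def handle_starttag(self, tag, attrs):
        if tag in ("h1", "h2", "h3", "h4", "h5", "h6"):
            self._current = int(tag[1])   # "h2" -> 2

    def handle_data(self, data):
        if self._current is not None and data.strip():
            self.outline.append((self._current, data.strip()))
            self._current = None

    def handle_endtag(self, tag):
        if self._current is not None and tag == f"h{self._current}":
            self._current = None

html = """
<h1>SEO Basics</h1>
<h2>Keyword Research</h2>
<h2>On-Page SEO</h2>
<h3>Heading Tags</h3>
"""

parser = HeadingOutline()
parser.feed(html)
print(parser.outline)
# [(1, 'SEO Basics'), (2, 'Keyword Research'), (2, 'On-Page SEO'), (3, 'Heading Tags')]
```

A quick audit like this makes it easy to see whether your headings form the kind of clean hierarchy recommended above, whatever tags your CMS happens to emit.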

    Real-world SEO — for better or worse — can be messy. Fortunately, it can also be flexible.

    Sign up for The Moz Top 10, a semimonthly mailer updating you on the top ten hottest pieces of SEO news, tips, and rad links uncovered by the Moz team. Think of it as your exclusive digest of stuff you don’t have time to hunt down but want to read!


    Source: https://moz.com/blog/h1-seo-experiment

    Forensic Architecture redeploys surveillance-state tech to combat state-sponsored violence

    The specter of constant surveillance hangs over all of us in ways we don’t even fully understand, but it is also possible to turn the tools of the watchers against them. Forensic Architecture is exhibiting several long-term projects at the Museum of Art and Design in Miami that use the omnipresence of technology as a way to expose crimes and violence by oppressive states.

    Over seven years Eyal Weizman and his team have performed dozens of investigations into instances of state-sponsored violence, from drone strikes to police brutality. Often these events are minimized at all levels by the state actors involved, denied or no-commented until the media cycle moves on. But sometimes technology provides ways to prove a crime was committed and occasionally even cause the perpetrator to admit it — hoisted by their own electronic petard.

    Sometimes this is actual state-deployed kit, like body cameras or public records, but it also uses private information co-opted by state authorities to track individuals, like digital metadata from messages and location services.

    For instance, when Chicago police shot and killed Harith Augustus in 2018, the department released some footage of the incident, saying that it “speaks for itself.” But Forensic Architecture’s close inspection of the body cam footage, cross-referenced with other materials, makes it obvious that the police violated numerous rules (including in the operation of the body cams) in their interaction with him, escalating the situation and ultimately killing a man who by all indications — except the official account — was attempting to comply. The investigation also brought to light additional footage that had been either mistakenly or deliberately left out of a FOIA release.

    In another situation, a trio of Turkish migrants seeking asylum in Greece were shown, by analysis of their WhatsApp messages, images and location and time stamps, to have entered Greece and been detained by Greek authorities before being “pushed back” by unidentified masked escorts, having been afforded no legal recourse to asylum processes or the like. This is one example of several recently that appear to be private actors working in concert with the state to deprive people of their rights.

    Situated testimony for survivors

    I spoke with Weizman before the opening of this exhibition in Miami, where some of the latest investigations are being shown off. (Shortly after our interview he would be denied entry to the U.S. to attend the opening, with a border agent explaining that this denial was algorithmically determined; we’ll come back to this.)

    The original motive for creating Forensic Architecture, he explained, was to elicit testimony from those who had experienced state violence.

    “We started using this technique when in 2013 we met a drone survivor, a German woman who had survived a drone strike in Pakistan that killed several relatives of hers,” Weizman explained. “She has wanted to deliver testimony in a trial regarding the drone strike, but like many survivors her memory was affected by the trauma she has experienced. The memory of the event was scattered, it had lacunae and repetitions, as you often have with trauma. And her condition is like many who have to speak out in human rights work: The closer you get to the core of the testimony, the description of the event itself, the more it escapes you.”

    The approach they took to help this woman, and later many others, jog her own memory, was something called “situated testimony.” Essentially it amounts to exposing the person to media from the experience, allowing them to “situate” themselves in that moment. This is not without its own risks.

    “Of course you must have the appropriate trauma professionals present,” Weizman said. “We only bring people who are willing to participate and perform the experience of being again at the scene as it happened. Sometimes details that would not occur to someone to be important come out.”

    A digital reconstruction of a drone strike’s explosion was recreated physically for another exhibition.

    But it’s surprising how effective it can be, he explained. One case exposed American involvement hitherto undisclosed.

    “We were researching a Cameroon special forces detention center, torture and death in custody occurred, for Amnesty International,” he explained. “We asked detainees to describe to us simply what was outside the window. How many trees, or what else they could see.” Such testimony could help place their exact location and orientation in the building and lead to more evidence, such as cameras across the street facing that room.

    “And sitting in a room based on a satellite image of the area, one told us: ‘yes, there were two trees, and one was over by the fence where the American soldiers were jogging.’ We said, ‘wait, what, can you repeat that?’ They had been interviewed many times and never mentioned American soldiers,” Weizman recalled. “When we heard there were American personnel, we found Facebook posts from service personnel who were there, and were able to force the transfer of prisoners there to another prison.”

    Weizman noted that the organization only goes where help is requested, and does not pursue what might be called private injustices, as opposed to public ones.

    “We require an invitation, to be invited into this by communities that experience state violence. We’re not a forensic agency, we’re a counter-forensic agency. We only investigate crimes by state authorities.”

    Using virtual reality: “Unparalleled. It’s almost tactile.”

    In the latest of these investigations, being exhibited for the first time at MOAD, the team used virtual reality for the first time in their situated testimony work. While VR has proven to be somewhat less compelling than most would like on the entertainment front, it turns out to work quite well in this context.

    “We worked with an Israeli whistleblower soldier regarding testimony of violence he committed against Palestinians,” Weizman said. “It has been denied by the Israeli prime minister and others, but we have been able to find Palestinian witnesses to that case, and put them in VR so we could cross reference them. We had victim and perpetrator testifying to the same crime in the same space, and their testimonies can be overlaid on each other.”

    Dean Issacharoff — the soldier accused by Israel of giving false testimony — describes the moment he illegally beat a Palestinian civilian. (Caption and image courtesy of Forensic Architecture)

    One thing about VR is that the sense of space is very real; if the environment is built accurately, things like sight-lines and positional audio can be extremely true to life. If someone says they saw the event occur here, but the state says it was here, and a camera this far away saw it at this angle… these incomplete accounts can be added together to form something more factual, and assembled into a virtual environment.

    “That project is the first use of VR interviews we have done — it’s still in a very experimental stage. But it didn’t involve fatalities, so the level of trauma was a bit more controlled,” Weizman explained. “We have learned that the level and precision we can arrive at in reconstructing an incident is unparalleled. It’s almost tactile; you can walk through the space, you can see every object: guns, cars, civilians. And you can populate it until the witness is satisfied that this is what they experienced. I think this is a first, definitely in forensic terms, as far as uses of VR.”

    A photogrammetry-based reconstruction of the area of Hebron where the incident took place.

    In video of the situated testimony, you can see witnesses describing locations more exactly than they likely or even possibly could have without the virtual reconstruction. “I stood with the men at exactly that point,” says one, gesturing toward an object he recognized, then pointing upwards: “There were soldiers on the roof of this building, where the writing is.”

    Of course it is not the digital recreation itself that forces the hand of those involved, but the incontrovertible facts it exposes. No one would ever have known that the U.S. had a presence at that detainment facility, and the country had no reason to say it did. The testimony wouldn’t even have been enough, except that it put the investigators onto a line of inquiry that produced data. And in the case of the Israeli whistleblower, the situated testimony defies official accounts that the organization he represented had lied about the incident.

    Avoiding “product placement” and tech incursion

    Sophie Landres, MOAD’s curator of Public Programs and Education, was eager to add that the museum is not hosting this exhibit as a way to highlight how wonderful technology is. It’s important to put the technology and its uses in context rather than try to dazzle people with its capabilities. You may find yourself playing into someone else’s agenda that way.

    “For museum audiences, this might be one of their first encounters with VR deployed in this way. The companies that manufacture these technologies know that people will have their first experiences with this tech in a cultural or entertainment context, and they’re looking for us to put a friendly face on these technologies that have been created to enable war and surveillance capitalism,” she told me. “But we’re not interested in having our museum be a showcase for product placement without having a serious conversation about it. It’s a place where artists embrace new technologies, but also where they can turn it towards existing power structures.”

    Boots on backs mean this is not an advertisement for VR headsets or 3D modeling tools.

    She cited a tongue-in-cheek definition of “mixed reality” referring to both digital crossover into the real world and the deliberate obfuscation of the truth at a greater scale.

    “On the one hand you have mixing the digital world and the real, and on the other you have the mixed reality of the media environment, where there’s no agreement on reality and all these misinformation campaigns. What’s important about Forensic Architecture is they’re not just presenting evidence of the facts, but also the process used to arrive at these truth claims, and that’s extremely important.”

    In openly presenting the means as well as the ends, Weizman and his team avoid succumbing to what he calls the “dark epistemology” of the present post-truth era.

    “The arbitrary logic of the border”

    As mentioned earlier, Weizman was denied entry to the U.S. for reasons unknown, but possibly related to the network of politically active people with whom he has associated for the sake of his work. Disturbingly, his wife and children were also stopped while entering the U.S. a day before him and separated at the airport for questioning.

    In a statement issued publicly afterwards, Weizman dissected the event.

    In my interview the officer informed me that my authorization to travel had been revoked because the “algorithm” had identified a security threat. He said he did not know what had triggered the algorithm but suggested that it could be something I was involved in, people I am or was in contact with, places to which I had traveled… I was asked to supply the Embassy with additional information, including fifteen years of travel history, in particular where I had gone and who had paid for it. The officer said that Homeland Security’s investigators could assess my case more promptly if I supplied the names of anyone in my network whom I believed might have triggered the algorithm. I declined to provide this information.

    This much we know: we are being electronically monitored for a set of connections – the network of associations, people, places, calls, and transactions – that make up our lives. Such network analysis poses many problems, some of which are well known. Working in human rights means being in contact with vulnerable communities, activists and experts, and being entrusted with sensitive information. These networks are the lifeline of any investigative work. I am alarmed that relations among our colleagues, stakeholders, and staff are being targeted by the US government as security threats.

    This incident exemplifies – albeit in a far less intense manner and at a much less drastic scale – critical aspects of the “arbitrary logic of the border” that our exhibition seeks to expose. The racialized violations of the rights of migrants at the US southern border are of course much more serious and brutal than the procedural difficulties a UK national may experience, and these migrants have very limited avenues for accountability when contesting the violence of the US border.

    The works being exhibited, he said, “seek to demonstrate that we can invert the forensic gaze and turn it against the actors — police, militaries, secret services, border agencies — that usually seek to monopolize information. But in employing the counter-forensic gaze one is also exposed to higher-level monitoring by the very state agencies investigated.”

    Forensic Architecture’s investigations are ongoing; you can keep up with them at the organization’s website. And if you’re in Miami, drop by MOAD to see some of the work firsthand.


    Source: https://techcrunch.com/2020/02/24/forensic-architecture-redeploys-surveillance-state-tech-to-combat-state-sponsored-violence/

    Venmo prototypes a debit card for teenagers

    Allowance is going digital. Venmo has been spotted prototyping a new feature that would allow adult users to create a debit card connected to their account for their teenage children. That could potentially let parents set spending notifications and limits while giving kids more flexibility in urgent situations than a few dollars stuffed in a pocket.

    Delving into children’s banking could establish a new reason for adults to sign up for Venmo, get them saving more in Venmo debit accounts where the company can earn interest on the cash, and drive purchase frequency that racks up interchange fees for Venmo’s owner PayPal.

    But Venmo is arriving late to the teen debit card market. Startups like Greenlight and Step let parents manage teen spending on dedicated debit cards. More companies like Kard and neobanking giant Revolut have announced plans to launch their own versions. And Venmo’s prototype uses very similar terminology to that of Current, a frontrunner in the children’s banking space with over 500,000 accounts that raised a $20 million Series B late last year.

    The first signs of Venmo’s debit card were spotted by reverse engineering specialist Jane Manchun Wong who’s provided slews of accurate tips to TechCrunch in the past. Hidden in Venmo’s Android app is code revealing a “delegate card” feature, designed to let users create a debit card that’s connected to their account but has limited privileges.

    A screenshot generated from hidden code in Venmo’s app, via Jane Manchun Wong

    A setup screen Wong was able to generate from the code shows the option to “Enter your teen’s info”, because “We’ll use this to set up the debit card”. It asks parents to enter their child’s name, birthdate, and “What does your teen call you?” That’s almost identical to the “What does [your child’s name] call you?” setup screen for Current’s teen debit card.

    When TechCrunch asked about the teen debit feature and when it might launch, a Venmo spokesperson gave a cagey response that implies it’s indeed internally testing the option, writing “Venmo is constantly working to identify ways to refine and enhance the user experience. We frequently test product offerings to understand the value it could have for our users, and I don’t have anything further to share right now.”

    Typically, the tech company product development flow sees teams come up with ideas, mock them up, prototype them in their real apps as internal-only features, test them externally with small percentages of real users, and then launch them officially if feedback and data are positive throughout. It’s unclear when Venmo might launch teen debit cards, though the product could always be scrapped. It’d need to move fast to beat Revolut and Kard to market.

    Current’s teen debit card

    The launch would build upon the June 2018 launch of Venmo’s branded MasterCard debit card that’s monetized through interchange fees and interest on savings. It offers payment receipts with options to split charges with friends within Venmo, free withdrawals at MoneyPass ATMs, rewards, and in-app features for resetting your PIN or disabling a stolen card. Venmo also plans to launch a credit card issued by Synchrony this year.

    Venmo might look to equip its teen debit card with popular features from competitors, like automatic weekly allowance deposits, notifications of all purchases, or the ability to block spending at certain merchants. It’s unclear if it will charge a fee like the $36 per year subscription for Current.

    Current offers these features for parents who set up a teen debit card

    Tech startups are increasingly pushing to offer a broad range of financial services where margins are high. It’s an easy way to earn cheap money at a time when unit economics are coming under scrutiny in the wake of the WeWork implosion. Investors are pinning their hopes on efficient financial services too, pouring $34 billion into fintech startups during 2019.

    Venmo’s already become a popular way for younger people to split the bill for Uber rides or dinner. Bringing social banking to a teen demographic probably should have been its plan all along.


    Source: https://techcrunch.com/2020/02/24/venmo-teen-debit-card/

    Clearscope Review: Is This New SEO Tool Any Good?

    This is a SUPER in-depth review of Clearscope.io.

    In this up-to-date review I’ll break down:

    • What Clearscope does
    • What makes it unique
    • Key features
    • Things I like
    • Things I don’t like
    • Whether or not it’s worth the price tag
    • Lots more

    Let’s get started.

    What Is Clearscope, Exactly?

    Clearscope.io is a keyword research and content optimization tool.

    Its bread and butter feature is called “Optimize”.

    This feature grades your content based on “content relevance and comprehensiveness”.

    In other words, the tool scans your content for key LSI keywords that Google considers closely related to your target keyword.

    And Clearscope hands you a letter grade based on how well your content is optimized for SEO.

    Now:

    I’ll break down this feature in A LOT more depth later on. But I wanted to quickly show you Clearscope’s main thing before digging into the nitty gritty details.

    How Does Clearscope Work?

    Let me show you exactly how this SEO tool works using a real-life example.

    When you first login to Clearscope, you land on a dashboard page.

    This page is where you can run “reports”. And see a history of reports that you recently ran.

    So the first thing you’ll do with Clearscope is type in a keyword that you want to rank for.

    Then, hit “Run report”.

    And Clearscope will get to work by scanning the top 30 pages that already rank for that term.

    (This process usually takes about 2-3 minutes)

    When it’s done, you’ll see a “report” page that looks like this:

    And when you hit the green “Optimize” button, you’ll go to a page that allows you to copy and paste your content into Clearscope.

    Then, they’ll grade that content based on how many terms your content shares with the top 30 results.

    In the sidebar you get a list of LSI keywords that you should try to include in your content.

    Some of the LSI terms will make perfect sense:

    Others… not so much.

    So yeah, that’s essentially how the tool works: you give it a keyword. It finds LSI keywords that you should include in your content. And it grades your content based on how many of those LSI keywords you already used.
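To make the mechanics concrete, here’s a toy Python sketch of the general idea. To be clear, this is NOT Clearscope’s actual algorithm (their scoring is proprietary); it just illustrates how term coverage could map to a letter grade:

```python
# Toy illustration (NOT Clearscope's real scoring): grade a draft by how
# many of the suggested related terms it already contains.
def coverage_grade(content: str, terms: list) -> str:
    text = content.lower()
    hits = sum(1 for t in terms if t.lower() in text)
    ratio = hits / len(terms) if terms else 0.0
    for cutoff, grade in [(0.9, "A"), (0.75, "B"), (0.5, "C"), (0.25, "D")]:
        if ratio >= cutoff:
            return grade
    return "F"

terms = ["seo", "keyword research", "backlinks", "meta description"]
draft = "This SEO guide covers keyword research and backlinks in depth."
print(coverage_grade(draft, terms))   # "B" -> 3 of 4 terms found
```

The real tool obviously weighs terms by frequency, position, and headers rather than simple presence, but the input/output shape is the same: content plus a term list in, letter grade out.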

    Now it’s time to go over the key Clearscope features.

    Grade Summary: Clearscope’s Keyword Overview

    The Grade summary is the first thing you see on a report page.

    This little box breaks down the current Google SERPs for that keyword based on a few different factors:

    • Content Grade: How comprehensive the content that ranks in positions 1-10 and 11-30 is, based on Clearscope’s grading system. A+ is the best. F- is the worst.
    • Word count: The average word count of the results. This is helpful because it gives you an idea of what Google users want to see. Do they want a quick, 200-word answer? Or an in-depth guide that’s 10k+ words? As far as I know, Clearscope.io is the only SEO tool on the market that has this feature. Which is surprising considering that it’s a relatively simple feature.
    • Readability: This shows you the Flesch Kincaid score for the top results. I tend to write in simple English anyway. So I don’t pay much attention to the readability.

    Keyword Search: Clearscope’s (Limited) Keyword Research Feature

    And if you hit the “Keyword search” button, you head over to Clearscope’s keyword research tool.

    As you can see, this is basically a list of keyword suggestions, CPCs and search volumes. Nothing fancy. And it doesn’t come close to the keyword features that you get in a tool like SEMrush.

    I’m 99% sure that they get this data from the Google Keyword Planner. Which, as you can see in this keyword tool analysis, has its pros and cons when it comes to generating keyword ideas.

    For example, their “Competition” score comes from Google AdWords.

    The “competition” score in the Google Keyword Planner has nothing to do with SEO. It’s NOT a keyword difficulty score.

    Instead, it’s a measure of how many people are bidding on that term. So it’s not helpful for keyword research. Honestly, I wish they didn’t even include this metric here. It’s confusing.

    That said, I don’t use Clearscope.io for keyword research. Just content optimization. But I did want to point out in this review that Clearscope does have a very basic keyword tool.

    Relevant Terms: Terms to Add to Your Content for SEO

    Relevant terms is a list of the terms that Clearscope found when it scraped the top 30 results.

    As you can see, they don’t just list out the terms. You also see how many times each term was used on each page:

    And a new feature that Clearscope recently rolled out lets you know whether a term shows up in a page’s header.

    Why is a keyword showing up in a header important?

    Well, Google puts more weight on terms that appear in header tags (like an H2). So if a keyword shows up randomly in the middle of a paragraph, that term may or may not be important.

    But if that same keyword is part of an H2 tag, it tells Google: “The terms in this H2 tag are an important part of this page”.

    For example, in my content marketing guide, chapter 1 is “Double Down On Video Content”.

    That heading is in an H2 tag. Which means that Google is going to see the term “Video Content” as important.
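If you want to run this kind of header check on your own pages, a quick-and-dirty Python sketch might look like this (the markup here is made up for the example; a regex is fine for a spot check, though a real parser is safer on messy HTML):

```python
# Quick-and-dirty check: which terms live inside a page's h1-h6 tags?
# (Regex HTML parsing is fragile; good enough for a simple spot check.)
import re

def header_texts(html: str) -> set:
    """Return the lowercased text of every h1-h6 element."""
    return {m.group(2).strip().lower()
            for m in re.finditer(r"<(h[1-6])[^>]*>(.*?)</\1>", html,
                                 re.IGNORECASE | re.DOTALL)}

page = "<h1>Content Marketing Guide</h1><h2>Double Down On Video Content</h2>"
headers = header_texts(page)
print(any("video content" in h for h in headers))   # True
```

Running this against the chapter heading mentioned above would confirm that “Video Content” sits inside an H2, i.e. in a spot Google treats as important.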

    Now:

    Even though seeing this list of related terms is helpful, it’s also redundant, because you get the same list in Clearscope’s “Optimize” feature. Which I’ll cover in a minute.

    But first, I need to talk about their “Competitors” feature:

    Competitors: See How The Competition Stacks Up

    If you click on the “Competitors” tab, you get a list of the top 30 results all graded by Clearscope.

    If you’re going after competitive keywords, then expect to run into a lot of A’s here. Otherwise, you may get lucky and find a term where the #1 piece of content is only a C+. Which is very beatable.

    In most cases, you’ll notice that the letter grades get worse as you go to the second page of Google and beyond. Which shows that Clearscope’s Content Grade is legit.

    That said: I don’t even worry about the 11-30 results. According to an analysis that we recently ran, less than 1% of Google users go to the second page of the results.

    I kind of wish that there was a feature where you could ONLY look at LSI keywords that show up in the top 10 results. Or the top 3.

    But I understand that they want to get a wide range of potential terms that you can use in your content. Which is why they go all the way to the third page of Google’s results.

    Optimize: Make Your Content More Comprehensive

    “Optimize” is where you’ll spend most of your time inside of the Clearscope platform.

    So in this Clearscope review I want to show you exactly how it works.

    The first thing you’ll see is a page like this:

    You’ve got a giant field for your content. And the right sidebar lists out all of the terms that you should try to include in your content.

    (And they also show you terms you should try to include in your headers)

    You can either copy and paste your draft into Clearscope. Or write directly in the editor.

    If you do choose to write your content in Clearscope, note that it has an auto-save function, like Google Docs.
    Either way, Clearscope will scan your content against their list of important terms.

    Then, you’ll get a letter grade.

    And a list of terms that you did and didn’t include.

    Now:

    Just because Clearscope says that a keyword’s “typical use” is 50 times doesn’t mean that you need to use it 50 times.

    It’s just a way to get a rough idea of how many times a term is used compared to the others on the list.

    For example, you can see here that the term “Google” is used 8-24 times on average.

    But a term like “keyword research” is only used 2-4 times.

    This doesn’t mean that I need to use “Google” 24 times. And “keyword research” 4 times. But it does tell me that the term “Google” IS more important than “keyword research”. And that I should try to use the term “Google” more than I use the term “keyword research”.

    So that’s how you use Clearscope. It’s basically a last step before you publish to make sure that your content is 100% optimized for SEO.

    Using Clearscope for Content Planning and Outlining

    Make no mistake:

    Clearscope.io is first and foremost an SEO tool.

    But I’ve actually been using it to help me plan and outline my content.

    And it’s sneaky good at making sure that your content doesn’t just include certain terms… but also covers key subtopics that Google wants to see.

    Here’s how:

    Let’s say that you wanted to write an article optimized around the keyword “SEO tutorial”.

    Well, you could just outline that post based on what you think people would want to read. But you can’t really be 100% sure.

    Enter: Clearscope.

    With Clearscope, you can use the headings that it suggests to actually create your post outline.

    For example, each of these headings would make GREAT sections for your article.

    In other words, you could use these suggested headings to create an outline that looks something like this:

    • Search Engine Optimization Basics
    • Everything You Need to Know About Link Building
    • Keyword Research 101
    • Meta Description Tips

    Etc.

    Even as someone that writes about SEO all day long, I find these heading suggestions helpful. But if you hire a freelance writer that’s new to the topic, this feature is HUGE.

    Instead of writing random stuff, your freelance writer basically has an outline handed to them. An outline that’s based on what’s already ranking for that keyword.

    Clearscope Support

    How does Clearscope’s support stack up?

    Well, I decided to run a little experiment to find out.

    First, I hit the “support” button in the site navigation.

    And I went directly to a simple contact form.

    No choosing an “issue category” or any BS like that. Just a simple form. Which I appreciate.

    And, as a bonus, they even filled in my name and email for me!

    This is a nice little touch that shows that they respect your time.

    So I asked them a question about their “heading” feature.

    In other words: I didn’t ask them something that their support team could answer with a template (like: “How do I change my password?”). Plus, I was genuinely curious about that detail.

    About 4 hours later, Kevin from their team sent me this:

    4 hours isn’t REALLY fast. But it’s not slow either. So they get a pass there.

    Overall, the response itself was solid. But not amazing either. For example, Kevin’s response could have used an example or a link to a knowledge base article. That way, he could be sure that I 100% understood his reply.

    But that’s a little bit nitpicky. Overall, I’d rate their support as a solid A-.

    My Results Using Clearscope

    I’ve been using (and paying for) Clearscope since September 2018.

    (That’s why I only pay $150/month. I got grandfathered into their original pricing.)

    So in this Clearscope review I did want to touch on my results from using the tool for almost two years now.

    So here’s the answer to the big question that you’re probably thinking about right now:

    Does Clearscope help you get higher rankings in Google?

    Overall, I can’t draw a straight line from using Clearscope to higher rankings. That’s because I use it to optimize pretty much all of our content at Backlinko.

    In other words: I haven’t run a controlled test of content that was optimized with Clearscope vs. content that wasn’t optimized with Clearscope.

    So I can’t give you a specific number like: “Clearscope has helped us get 18.4% more organic traffic”.

    But I can say that, overall, Clearscope has helped our content get more organic traffic. Which is why I continue to use the tool.

    Plus, adding relevant terms to your content can’t hurt your Google rankings (as long as you don’t shoehorn terms into your content just for the sake of it).

    And making your content more comprehensive can definitely help with SEO. Which, at least to me, makes Clearscope worth using.

    Clearscope Pricing

    My Clearscope review just wouldn’t be complete without talking about pricing.

    This section is going to be pretty short. Because Clearscope only has two pricing tiers: $350/month or an “Agency & Enterprise” plan.

    So yeah, not cheap. Especially when you compare Clearscope to other SEO tools on the market, like Ahrefs.

    Comparing Clearscope to Ahrefs or other SEO software suites isn’t a 1:1 comparison. Clearscope is pretty unique.

    That said, in terms of pricing, Clearscope is definitely on the higher end.

    Clearscope Review Bottom Line: Is Clearscope Worth $350 Per Month?

    If your SEO is a big part of your digital marketing strategy, then I would say Clearscope is worth the relatively steep price tag.

    At first glance, $350 may seem like a lot for an SEO tool. And it honestly is.

    But to me, a tool’s value is based on the ROI that it gives you.

    In other words: I’d rather spend $1k/month on an amazing tool than $50/month on a tool that doesn’t help me all that much.

    And if you feel that a boost in your organic traffic can generate significantly more than $350/month, then I would give Clearscope a shot.

    Plus, there aren’t any contracts or long-term commitments. You can use Clearscope for a month. See how it goes. If you don’t find that it’s improving your SEO, you can cancel.

    That said: if you’re just starting out, have a limited SEO software budget, or are new to SEO, then I’d pass on Clearscope.

    If that describes you, then you’re better off optimizing your content with traditional on-page SEO best practices. You’ll get 80-90% of the same results as you would with an advanced optimization tool like Clearscope.

    But if SEO is a big part of your business, I recommend trying Clearscope.

    Now It’s Your Turn

    So that’s it for my review of Clearscope.

    Now I’d like to hear from you:

    Have you tried Clearscope before?

    If so, what was your experience with it? Good? Bad? Somewhere in between?

    Either way, let me know by leaving a comment below right now.

    The post Clearscope Review: Is This New SEO Tool Any Good? appeared first on Backlinko.

    Source: https://backlinko.com/clearscope-review

    Facebook’s Creator Studio gains a mobile companion

    Facebook’s Creator Studio has added a mobile companion. The insights dashboard for creators and publishers, which debuted globally in August 2018, is now available as a mobile app for both iOS and Android. Similar to the desktop hub, the Creator Studio app allows users to track how their content is performing across Facebook Pages, as well as publish, schedule and make adjustments to posts, respond to fan messages, and more.

    Facebook Director of Entertainment for Northern Europe Anna Higgs took the stage along with creator Ladbaby, who has over 4 million Facebook followers, to share the news of the new app’s launch at last week’s VidCon London.

    There are a few key areas where the app can be of use to creators and publishers, starting with its metrics and insights section. Here, users can analyze both Page and post-level insights, retention, and distribution metrics in order to adjust their strategies accordingly. For example, they’ll find content performance metrics like “1-minute views,” “3-second views,” and “avg. minutes viewed,” plus engagement metrics like comments and shares, and follower counts, earnings, and more.

    The app also serves as a mobile companion for viewing both published and scheduled posts, allowing creators to make quick adjustments like editing the video titles or descriptions. And they can use the app for deleting or expiring posts, rescheduling posts, or publishing drafts.

    From the inbox section, users can respond to incoming messages and comments while on the go.

    Creators can toggle between their different accounts during the same session, instead of having to log out and back in as a different user. This could be helpful for those who have a large social media presence, as well as those whose business involves supporting multiple creator pages.

    The Creator Studio app will also send out immediate notifications for key milestones and other important events.

    This isn’t the first time Facebook has offered a dedicated app for its creator community. The company debuted a Creator app in 2017, which also offered a unified inbox and analytics, among other things. But that app was shut down early last year, and creators were pointed toward the Pages Manager app or the desktop version of Creator Studio instead. Before that, Facebook had offered a Mentions app that was only available to verified public figures and Pages.

    The new Creator Studio app isn’t a direct replacement for the shuttered Creator app, as it sports a similar, though not identical, feature set and a new user interface. It also notably lacks Instagram integration and the ability to upload and post new content — the latter of which is contributing to poor user reviews following the app’s launch. Many complain there’s too much overlap with the Pages Manager app, as well. But the missing features are something Facebook will likely address in the future, as it rolls out more functionality to the app.

    It’s worth noting that Facebook’s desktop hub and app sport a name similar to YouTube’s service for creators — YouTube Studio, rebranded from YouTube Creator Studio in 2017. By including both “studio” and “creator” in the new app’s name, Facebook ensures its app will perform better in App Store search results — including those that appear when someone searches for the YouTube Studio app for creators. That reflects the competitive nature of the relationship between the two companies, both hungry to woo video creator talent.

    Facebook’s new app is a free download on iOS and Android.


    Source: https://techcrunch.com/2020/02/24/facebooks-creator-studio-gains-a-mobile-companion/