Slack off — send videos instead with $11M-funded Loom

If a picture is worth a thousand words, how many emails can you replace with a video? As offices fragment into remote teams, work becomes more visual, and social media makes us more comfortable on camera, it’s time for collaboration to go beyond text. That’s the idea behind Loom, a fast-rising startup that equips enterprises with instant video messaging tools. In a click, you can film yourself or narrate a screenshare to get an idea across in a more vivid, personal way. Instead of scheduling a video call, employees can asynchronously discuss projects or give “stand-up” updates without massive disruptions to their workflow.

In the 2.5 years since launch, Loom has signed up 1.1 million users from 18,000 companies. And that was just as a Chrome extension. Today Loom launches its PC and Mac apps that give it a dedicated presence in your digital work space. Whether you’re communicating across the room or across the globe, “Loom is the next best thing to being there,” co-founder Shahed Khan tells me.

Now Loom is ready to spin up bigger sales and product teams thanks to an $11 million Series A led by Kleiner Perkins. The firm’s partner Ilya Fushman, formerly Dropbox’s head of product and corporate development, will join Loom’s board. He’ll shepherd Loom through today’s launch of its $10 per month per user Pro version that offers HD recording, calls-to-action at the end of videos, clip editing, live annotation drawings and analytics to see who actually watched like they’re supposed to.

“We’re ditching the suits and ties and bringing our whole selves to work. We’re emailing and messaging like never before, but though we may be more connected, we’re further apart,” Khan tells me. “We want to make it very easy to bring the humanity back in.”

Loom co-founder Shahed Khan

But back in 2016, Loom was just trying to survive. Khan had worked at Upfront Ventures after a stint as a product designer at website builder Weebly. He and two close friends, Joe Thomas and Vinay Hiremath, started Opentest to let app makers get usability feedback from experts via video. But after six months and going through the NFX accelerator, they were running out of bootstrapped money. That’s when they realized it was the video messaging that could be a business as teams sought to keep in touch with members working from home or remotely.

Together they launched Loom in mid-2016, raising a pre-seed and seed round amounting to $4 million. Part of its secret sauce is that Loom immediately starts uploading bytes of your video while you’re still recording so it’s ready to send the moment you’re finished. That makes sharing your face, voice and screen feel as seamless as firing off a Slack message, but with more emotion and nuance.

“Sales teams use it to close more deals by sending personalized messages to leads. Marketing teams use Loom to walk through internal presentations and social posts. Product teams use Loom to capture bugs, stand ups, etc.,” Khan explains.

Loom has grown to a 16-person team that will expand thanks to the new $11 million Series A from Kleiner, Slack, Cue founder Daniel Gross and actor Jared Leto that brings it to $15 million in funding. They predict the new desktop apps that open Loom to a larger market will see it spread from team to team for both internal collaboration and external discussions from focus groups to customer service.

Loom will have to hope that after becoming popular at a company, managers will pay for the Pro version that shows exactly how long each viewer watched. That could clue them in that they need to be more concise, or that someone is cutting corners on training and cooperation. It’s also a great way to onboard new employees. “Just watch this collection of videos and let us know what you don’t understand.” At $10 per month though, the same cost as Google’s entire GSuite, Loom could be priced too high.

Next Loom will have to figure out a mobile strategy — something that’s surprisingly absent. Khan imagines users being able to record quick clips from their phones to relay updates from travel and client meetings. Loom also plans to build out voice transcription to add automatic subtitles to videos and even divide clips into thematic sections you can fast-forward between. Loom will have to stay ahead of competitors like Vidyard’s GoVideo and Wistia’s Soapbox that have cropped up since its launch. But Khan says Loom looms largest in the space thanks to customers at Uber, Dropbox, Airbnb, Red Bull and 1,100 employees at HubSpot.

“The overall space of collaboration tools is becoming deeper than just email + docs,” says Fushman, citing Slack, Zoom, Dropbox Paper, Coda, Notion, Intercom, Productboard and Figma. To get things done the fastest, businesses are cobbling together B2B software so they can skip building it in-house and focus on their own product.

No piece of enterprise software has to solve everything. But Loom is dependent on apps like Slack, Google Docs, Convo and Asana. Because it lacks a social or identity layer, you’ll need to send the links to your videos through another service. Loom should really build its own video messaging system into its desktop app. But at least Slack is an investor, and Khan says “they’re trying to be the hub of text-based communication,” and the soon-to-be-public unicorn tells him anything it does in video will focus on real-time interaction.

Still, the biggest threat to Loom is apathy. People already feel overwhelmed with Slack and email, and if recording videos comes off as more of a chore than an efficiency, workers will stick to text. And without the skimability of an email, you can imagine a big queue of videos piling up that staffers don’t want to watch. But Khan thinks the ubiquity of Instagram Stories is making it seem natural to jump on camera briefly. And the advantage is that you don’t need a bunch of time-wasting pleasantries to ensure no one misinterprets your message as sarcastic or pissed off.

Khan concludes, “We believe instantly sharable video can foster more authentic communication between people at work, and convey complex scenarios and ideas with empathy.”


Source: https://techcrunch.com/2019/02/19/loom-video/


We Analyzed 912 Million Blog Posts. Here’s What We Learned About Content Marketing


We analyzed 912 million blog posts to better understand the world of content marketing right now.

Specifically, we looked at how factors like content format, word count and headlines correlate with social media shares and backlinks.

With the help of our data partner BuzzSumo, we uncovered some very interesting findings.

And now it’s time to share what we discovered.

Here is a Summary of Our Key Findings:

1. We found that long-form content gets an average of 77.2% more links than short articles. Therefore, long-form content appears to be ideal for backlink acquisition.

2. When it comes to social shares, longer content outperforms short blog posts. However, we found diminishing returns for articles that exceed 2,000 words.

3. The vast majority of online content gets few social shares and backlinks. In fact, 94% of all blog posts have zero external links.

4. A small percentage of “Power Posts” get a disproportionate amount of social shares. Specifically, 1.3% of articles generate 75% of all social shares.

5. We found virtually no correlation between backlinks and social shares. This suggests that there’s little crossover between highly-shareable content and content that people link to.

6. Longer headlines are correlated with more social shares. Headlines that are 14-17 words in length generate 76.7% more social shares than short headlines.

7. Question headlines (titles that end with a “?”) get 23.3% more social shares than headlines that don’t end with a question mark.

8. There’s no “best day” to publish a new piece of content. Social shares are distributed evenly among posts published on different days of the week.

9. List posts are heavily shared on social media. In fact, list posts get an average of 218% more shares than “how to” posts and 203% more shares than infographics.

10. Certain content formats appear to work best for acquiring backlinks. We found that “Why Posts”, “What Posts” and infographics received 25.8% more links compared to videos and “How-to” posts.

11. The average blog post gets 9.7x more shares than a post published on a B2B site. However, the distribution of shares and links for B2B and B2C publishers appears to be similar.

We have detailed data and information on our findings below.

Long-Form Content Generates More Backlinks Than Short Blog Posts

When it comes to acquiring backlinks, long-form content significantly outperforms short blog posts and articles.


You may have seen other industry studies, like this one, that found a correlation between long-form content and first page Google rankings.

However, to our knowledge no one has investigated why longer content tends to perform so well. Does the Google algorithm inherently prefer long content? Or perhaps longer content is best at satisfying searcher intent.

While it’s impossible to draw any firm conclusions from our study, our data suggests that backlinks are at least part of the reason that long-form content tends to rank in Google’s search results.

Key Takeaway: Content that’s >3000 words gets an average of 77.2% more referring domain links compared to content that’s <1000 words.

The Ideal Content Length For Maximizing Social Shares Is 1,000-2,000 Words

According to our data, long-form content (>1000 words) generates significantly more social shares than short content (<1000 words).

However, our research indicates that there are diminishing returns once you reach the 2,000-word mark.


In other words, 1,000-2,000 words appears to be the “sweet spot” for maximizing shares on social media networks like Facebook, Twitter, Reddit and Pinterest.

In fact, articles between 1k-2k words get an average of 56.1% more social shares than content that’s <1000 words.

Key Takeaway: Content between 1k-2k words is ideal for generating social shares.
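
If you want to run this kind of word-count analysis on your own content data, a minimal sketch in Python with pandas could look like the following. The file name and the word_count, social_shares and referring_domains columns are assumptions for illustration, not fields from the BuzzSumo dataset used in this study.

import pandas as pd

# Load an export of your own content data; the file and column names
# here are assumptions for illustration.
df = pd.read_csv("content_metrics.csv")

# Bucket articles by word count, mirroring the ranges discussed above.
bins = [0, 1000, 2000, 3000, float("inf")]
labels = ["<1,000", "1,000-2,000", "2,000-3,000", ">3,000"]
df["length_bucket"] = pd.cut(df["word_count"], bins=bins, labels=labels, right=False)

# Compare average shares and referring domains per word-count bucket.
summary = df.groupby("length_bucket", observed=True)[["social_shares", "referring_domains"]].mean()
print(summary)

# Example: percentage lift of long-form articles over short ones.
lift = (summary.loc[">3,000", "referring_domains"] /
        summary.loc["<1,000", "referring_domains"] - 1) * 100
print(f"Long-form articles earn {lift:.1f}% more referring domains on average")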

The Vast Majority of Content Gets Zero Links

It’s no secret that backlinks remain an extremely important Google ranking signal.

Google’s recently reiterated this fact in their “How Search Works” report.

Google – How search works

We found that actually getting these links is extremely difficult.

In fact, our data showed that 94% of the world’s content gets zero external links.


It’s fair to say that getting someone to link to your content is difficult. And we found that getting links from multiple websites is even more challenging.

In fact, only 2.2% of content generates links from multiple websites.


Why is it so hard to get backlinks?

While it’s impossible to answer this question from our data alone, it’s likely due to a sharp increase in the amount of content that’s published every day.

For example, WordPress reports that 87 million posts were published on their platform in May 2018, which is a 47.1% increase compared to May 2016.

Number of posts published (WordPress)

That’s an increase of 27 million monthly blog posts in a 2 year span.

It appears that, due to the sharp rise in content produced, building links from content is harder than ever.

A 2015 study published on the Moz blog concluded that, of the content in their sample, “75% had zero external links”. Again: our research from this study found that 94% of all content has zero external links. This suggests that getting links to your content is significantly harder compared to just a few years ago.

Key Takeaway: Building links through content marketing is more challenging than ever. Only 6% of the content in our sample had at least one external link.

A Small Number of “Power Posts” Get a Large Proportion of Shares

Our data shows that social shares aren’t evenly distributed. Not even close.

We found that a small number of outliers (“Power Posts”) receive the majority of the world’s social shares.

Specifically, 1.3% of articles get 75% of the social shares.

And a small subset of those Power Posts tend to get an even more disproportionate amount of shares.

In fact, 0.1% of articles in our sample got 50% of the total amount of social shares.

In other words, approximately half of all social shares go to an extremely small number (0.1%) of viral posts.

For example, this story about shoppers buying and returning clothes from ecommerce sites received 77.3 thousand Facebook shares.

This single article got more Facebook shares than the rest of the top 20 posts about ecommerce combined.

Key Takeaway: The majority of social shares are generated from a small number of posts. 75% of all social shares come from only 1.3% of published content.
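
To see how concentrated shares are in your own niche, a rough sketch of this kind of concentration check is below; the share counts are made-up illustration numbers, not data from this study.

import numpy as np

# Illustrative share counts for 1,000 posts: a few viral outliers and a long tail.
shares = np.array([77300, 12000, 9500, 4000] + [25] * 996)

# Sort posts from most to least shared and compute the cumulative
# fraction of total shares captured by the top posts.
sorted_shares = np.sort(shares)[::-1]
cumulative = np.cumsum(sorted_shares) / sorted_shares.sum()

# How small a fraction of posts accounts for 75% of all shares?
posts_needed = np.searchsorted(cumulative, 0.75) + 1
print(f"{posts_needed / len(shares):.1%} of posts capture 75% of all shares")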

There’s Virtually No Correlation Between Social Shares and Backlinks

We found no correlation between social shares and backlinks (Pearson correlation coefficient of 0.078).

In other words, content that receives a lot of links doesn’t usually get shared on social media.

(And vice versa)

And when content does get shared on social media, those shares don’t usually result in more backlinks.

This may surprise a lot of publishers, as “Sharing your content on social media” is considered an SEO best practice. The idea is that social media helps your content get in front of more people, which increases the likelihood that someone will link to you.

While this makes sense in theory, our data shows that this doesn’t play out in the real world.

That’s because, as Steve Rayson put it: “People share and link to content for different reasons”.

So it’s important to create content that caters to your goals.

Do you want to go viral on Facebook? Then list posts might be your best bet.

Is your #1 goal to get more backlinks? Then you probably want to publish infographics and other forms of visual content.

We will outline the differences between highly-linkable and highly-shareable content below.

But for now, it’s important to note that there’s very little overlap between content that gets shared on social media and content that people link to.

Key Takeaway: There’s no correlation between social media shares and links.
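
If you’d like to run the same kind of check on your own dataset, here is a minimal sketch of the correlation calculation; the file and column names are assumptions for illustration, and the dataset is your own export rather than the one used in this study.

import pandas as pd
from scipy.stats import pearsonr

# Load your own content export; column names are assumed for illustration.
df = pd.read_csv("content_metrics.csv")

# A Pearson r close to 0 (the study reports 0.078) means heavily shared
# content is not systematically the content that earns links.
r, p_value = pearsonr(df["social_shares"], df["referring_domains"])
print(f"Pearson r = {r:.3f} (p = {p_value:.3g})")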

Long Headlines are Correlated With High Levels of Social Sharing

Previous industry studies have found a relationship between “long” headlines and social shares.

Our data found a similar relationship. In fact, we discovered that “very long” headlines outperform short headlines by 76.7%.

Source: https://backlinko.com/content-study


Twitter names first international markets to get checks on political advertisers

Twitter has announced it’s expanding checks on political advertisers outside the U.S. to also cover Australia, India and all the member states of the European Union.

This means anyone wanting to run political ads on its platform in those regions will first need to go through its certification process to prove their identity and certify a local location via a verification letter process.

Enforcement of the policies will kick in in the three regions on March 11, Twitter said today in a blog post. “Political advertisers must apply now for certification and go through every step of the process,” it warns.

The company’s ad guidelines, which were updated last year, are intended to make it harder for foreign entities to target elections by adding a requirement that political advertisers self-identify and certify they’re locally based.

A Twitter spokeswoman told us that advertiser identity requirements include providing a copy of a national ID, and for candidates and political parties specifically it requires an official copy of their registration with the national election authority.

The company’s blog post does not explain why it’s selected the three international regions it has named for its first expansion of political checks outside the U.S. But they do all have elections coming up in the next few months.

Elections to the EU parliament take place in May, while India’s general elections are expected to take place in April and May. Australia is also due to hold a federal election by May 2019.

Twitter has been working on ad transparency since 2017, announcing the launch of a self-styled Advertising Transparency Center back in the fall of that year, following political scrutiny over the role of social media platforms in spreading Kremlin-backed disinformation during the 2016 US presidential election. It went on to launch the center in June 2018.

It also announced updated guidelines for political advertisers in May 2018 which also came into effect last summer, ahead of the U.S. midterms.

The ad transparency hub lets anyone (not just Twitter users) see all ads running on its platform, including the content/creative; how long ads have been running; and any ads specifically targeted at them if they are a user. Ads can also be reported to Twitter as inappropriate via the Center.

Political/electioneering ads get a special section that also includes information on who’s paying for the ad, how much they’ve spent, impressions per tweet and demographic targeting.

Though initially the political transparency layer only covered U.S. ads.

Now, more than half a year on, Twitter is preparing to expand the same system of checks to its first international regions.

In countries where it has implemented the checks, organizations buying political ads on its platform are also required to comply with a stricter set of rules for how they present their profiles to enforce a consistent look vis-a-vis how they present themselves online elsewhere — to try to avoid political advertisers trying to pass themselves off as something they’re not.

These consistency rules will apply to those wanting to run political ads in Europe, India and Australia from March. Twitter will also require that political advertisers in these regions include a link to a website with valid contact info in their Twitter bio.

Political advertisers with Twitter handles not related to their certified entity must also include a disclaimer in their bio stating the handle is “owned by” the certified entity name.

The company’s move to expand political ad checks outside the U.S. is certainly welcome, but it does highlight how piecemeal such policies remain, with many more international regions with upcoming elections still lacking such checks, or even a timeline for getting them.

Including countries with very fragile democracies where political disinformation could be a hugely potent weapon.

Indonesia, which is a major market for Twitter, is due to hold a general election in April, for instance. The Philippines is also due to hold a general election in May. While Thailand has an election next month.

We asked Twitter whether it has any plans to roll out political ad checks in these three markets ahead of their key votes but the company declined to make a statement on why it had focused on the EU, Australia and India first.

A spokeswoman did tell us that it will be expanding the policy and enforcement globally in the future, though she would not provide a timeline for any further international expansion. 


Source: https://techcrunch.com/2019/02/19/twitter-names-first-international-markets-to-get-checks-on-political-advertisers/

How to Think About SEO


Don’t you hate how it takes forever to get results when it comes to SEO?

Everyone says it takes 6 months to a year, and in some cases many years, to see results.

Well, I have some bad news and some good news for you.

Let’s start with the bad news…

SEO is a long-term strategy. It’s not about doing it for a few months and forgetting about it. And if you stop focusing on it, your competitors will eventually outrank you.

And now let’s get on to the good news.

You can get results in the short run. You may not get all of the results you want right away, and you may not rank for your ideal keywords, but that doesn’t mean you can’t get results within 90, 60, and even potentially 30 days.

So how do you get results within a few months?

Well first, let’s rewire your brain so you think about SEO in the correct way.

SEO isn’t just content and links

If you want to rank number 1 on Google, what do you need?

Well, the data shows you need to write lengthy content. Because the average web page that ranks on page 1 of Google contains 1,890 words.

word count

And of course, what’s content without links? Because the 2 most important factors that affect rankings according to the SEO industry are domain level links and page level links.

moz links

But here is the thing: SEO isn’t what it used to be. Until 2010, you used to be able to add keywords in your meta tags and you would get rankings within a few months.

And as the web got more crowded, you could then get results by doing the same old thing but you also had to build a few links. That worked really well between 2010 and 2013.

As more businesses popped up, everyone started focusing on content marketing. That was the hot thing. From 2013 to 2017, if you created tons of text-based content, got a few social shares, and picked up a few natural backlinks you could dominate Google.

But now, there are over a billion blogs if you include WordPress.com, Medium, and Tumblr.

That means Google has their choice when it comes to determining what sites to rank at the top.

In other words, just because you write lengthy content or build backlinks, it doesn’t mean you are going to rank. Millions of other sites do that as well.

And even if you got in early and your site is 10 years old, it’s no longer that easy to dominate the web.

Just look at sites like Wisegeek. They used to dominate the web as a site with thousands of informative articles.

And now look at their traffic

wisegeek

According to Ubersuggest, they get roughly 49,211 visitors a month from Google within the United States. It may seem like a lot, but their traffic is continually going down.

When I met the founder years ago it was in the millions… but not anymore.

It doesn’t even matter that the site has 8,761,524 backlinks from 74,147 referring domains.

wisegeek links

Now you may make the argument that Wisegeek doesn’t have the best content. But I have tons of examples of sites with amazing content that have the same issues.

For example, Derek Halpern from Social Triggers creates great content. Just go check out some of his blog posts if you don’t believe me.

But let’s dive into his traffic stats

social triggers

According to Ubersuggest, he gets roughly 26,640 visitors a month from Google in the United States and he has 993,790 backlinks from 5,678 referring domains.

And he ranks for some great terms. Just look at the top pages he is ranking for with terms like “how to become more confident.”

social triggers top pages

But even Social Triggers has struggled to keep their traffic over time. It’s nothing to do with Derek; he’s a smart entrepreneur, but he decided to quit and focus on his new venture Truvani, which has been doing well.

In other words, content and links don’t guarantee success.

So, what’s the best way to get rankings these days?

You have to go after low hanging fruit.

Sure, you need content, you need links, and you need to optimize for the other 198 factors Google keeps track of to get the optimal amount of traffic.

But it’s UNREALISTIC for you to do everything. Even if you hire an SEO agency to help you out.

And there is no way you can wait 12 months to get results from an SEO campaign.

Which means your only solution is going after the low hanging fruit.

Now I wish I could tell you the exact low hanging fruit to go after, but it varies for every site. What I can do is show some of the simple tactics that have worked for me and are easy to implement.

Strategy #1: Don’t put dates in your URL

I used to have dates in my URL because it was a default option from WordPress. I didn’t think twice about it. But the moment I removed the dates from my posts, my search traffic went up by 58%.

Best of all, it only took 30 days to get the boost in traffic.
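
How you remove the dates depends on your setup, but the key detail is that every old dated URL should 301-redirect to its new date-free version so you keep the rankings and links you’ve already earned. Below is a minimal sketch of that mapping logic in Python, assuming the common WordPress /year/month/day/slug/ permalink structure; adapt the pattern to your own settings.

import re
from typing import Optional

# Match the common WordPress dated permalink structure: /YYYY/MM/DD/slug/
DATED_URL = re.compile(r"^/(\d{4})/(\d{2})/(\d{2})/(?P<slug>[^/]+)/?$")

def undated_url(path: str) -> Optional[str]:
    """Return the date-free path to 301-redirect to, or None if the path has no date."""
    match = DATED_URL.match(path)
    if match is None:
        return None
    return f"/{match.group('slug')}/"

# Example: an old dated URL maps to its new clean equivalent.
print(undated_url("/2019/02/19/how-to-think-about-seo/"))  # -> /how-to-think-about-seo/
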
Source: https://neilpatel.com/blog/how-to-think-about-seo/

Instagram’s fundraiser stickers could lure credit card numbers

Mark Zuckerberg recently revealed that commerce is a huge part of the 2019 roadmap for Facebook’s family of apps. But before people can easily buy things from Instagram etc, Facebook needs their credit card info on file. That’s a potentially lucrative side effect of Instagram’s plan to launch a Fundraiser sticker in 2019. Facebook’s own Donate buttons have raised $1 billion, and bringing them to Instagram’s 1 billion users could do a lot of good while furthering Facebook’s commerce strategy.

New code and imagery dug out of Instagram’s Android app reveals how the Fundraiser stickers will allow you to search for non-profits and add a Donate button for them to your Instagram Story. After you’ve donated to something once, Instagram could offer instant checkout on stuff you want to buy using the same payment details.

Back in 2013 when Facebook launched its Donate button, I suggested that it could add a “remove credit card after checkout” option to its fundraisers if it wanted to make it clear that the feature was purely altruistic. Facebook never did that. You still need to go into your payment settings or click through the See Receipt option after donating and then edit your account settings to remove your credit card. We’ll see if Instagram is any different. We’ve also asked whether Instagrammers will be able to raise money for personal causes, which would make it more of a competitor to GoFundMe — which has sadly become the social safety net for many facing healthcare crises.

Facebook mentioned at its Communities Summit earlier this month that it’d be building Instagram Fundraiser stickers, but the announcement was largely overshadowed by the company’s reveal of new Groups features. This week, TechCrunch tipster Ishan Agarwal found code in the Instagram Android app detailing how users will be able to search for non-profits or browse collections of Suggested charities and ones they follow. They can then overlay a Donate button sticker on their Instagram Story that their followers can click through to contribute.

We then asked reverse engineering specialist Jane Manchun Wong to take a look, and she was able to generate the screenshots seen above that show a green heart icon for the Fundraiser sticker plus the non-profit search engine. Facebook’s spokespeople tell me that “We are in early stages and working hard to bring this experience to our community . . . Instagram is all about bringing you closer to the people and things you love, and a big part of that is showing support for and bringing awareness to meaningful communities and causes. Later this year, people will be able to raise money and help support nonprofits that are important to them through a donation sticker in Instagram Stories. We’re excited to bring this experience to our community and will share more updates in the coming months.”

Zuckerberg said during the Q4 2018 earnings call last month that “In Instagram, one of the areas I’m most excited about this year is commerce and shopping . . . there’s also a very big opportunity in basically enabling the transactions and making it so that the buying experience is good”. Streamlining those transactions through saved payment details means more people will complete their purchase rather than abandoning their cart. Facebook CFO David Wehner noted on the call that “Continuing to build good advertising products for our e-commerce clients on the advertising side will be a more important contributor to revenue in the foreseeable future”. Even though Facebook isn’t charging a fee on transactions, powering higher commerce conversion rates convinces merchants to buy more ads on the platform.

With all the talk of envy spiraling, phone addiction, bullying, and political propaganda, enabling donations is at least one way Instagram can prove it’s beneficial to the world. Snapchat lacks formal charity features, and Twitter appears to have ended its experiment allowing non-profits to tweet donate buttons. Despite all the flack Facebook rightfully takes, the company has shown a strong track record with philanthropy that mirrors Zuckerberg’s own $47 billion commitment through the Chan Zuckerberg Initiative. And if having some relatively benign secondary business benefit speeds companies towards assisting non-profits, that’s a trade-off we should be willing to embrace.


Source: https://techcrunch.com/2019/02/18/instagram-fundraisers/

YouTube under fire for recommending videos of kids with inappropriate comments

More than a year on from a child safety content moderation scandal on YouTube, it takes just a few clicks for the platform’s recommendation algorithms to redirect a search for “bikini haul” videos of adult women towards clips of scantily clad minors engaged in body contorting gymnastics or taking an icebath or ice lolly sucking “challenge”.

A YouTube creator called Matt Watson flagged the issue in a critical Reddit post, saying he found scores of videos of kids where YouTube users are trading inappropriate comments and timestamps below the fold, denouncing the company for failing to prevent what he describes as a “soft-core pedophilia ring” from operating in plain sight on its platform.

He has also posted a YouTube video demonstrating how the platform’s recommendation algorithm pushes users into what he dubs a pedophilia “wormhole”, accusing the company of facilitating and monetizing the sexual exploitation of children.

We were easily able to replicate the YouTube algorithm’s behavior that Watson describes in a history-cleared private browser session which, after clicking on two videos of adult women in bikinis, suggested we watch a video called “sweet sixteen pool party”.

Clicking on that led YouTube’s side-bar to serve up multiple videos of prepubescent girls in its ‘up next’ section where the algorithm tees-up related content to encourage users to keep clicking.

Videos we got recommended in this side-bar included thumbnails showing young girls demonstrating gymnastics poses, showing off their “morning routines”, or licking popsicles or ice lollies.

Watson said it was easy for him to find videos containing inappropriate/predatory comments, including sexually suggestive emoji and timestamps that appear intended to highlight, shortcut and share the most compromising positions and/or moments in the videos of the minors.

We also found multiple examples of timestamps and inappropriate comments on videos of children that YouTube’s algorithm recommended we watch.

Some comments by other YouTube users denounced those making sexually suggestive remarks about the children in the videos.

Back in November 2017 several major advertisers froze spending on YouTube’s platform after an investigation by the BBC and the Times discovered similarly obscene comments on videos of children.

Earlier the same month YouTube was also criticized over low quality content targeting kids as viewers on its platform.

The company went on to announce a number of policy changes related to kid-focused video, including saying it would aggressively police comments on videos of kids and that videos found to have inappropriate comments about the kids in them would have comments turned off altogether.

Some of the videos of young girls that YouTube recommended we watch had already had comments disabled — which suggests its AI had previously identified a large number of inappropriate comments being shared (on account of its policy of switching off comments on clips containing kids when comments are deemed “inappropriate”) — yet the videos themselves were still being suggested for viewing in a test search that originated with the phrase “bikini haul”.

Watson also says he found ads being displayed on some videos of kids containing inappropriate comments, and claims that he found links to child pornography being shared in YouTube comments too.

We were unable to verify those findings in our brief tests.

We asked YouTube why its algorithms skew towards recommending videos of minors, even when the viewer starts by watching videos of adult women, and why inappropriate comments remain a problem on videos of minors more than a year after the same issue was highlighted via investigative journalism.

The company sent us the following statement in response to our questions:

Any content — including comments — that endangers minors is abhorrent and we have clear policies prohibiting this on YouTube. We enforce these policies aggressively, reporting it to the relevant authorities, removing it from our platform and terminating accounts. We continue to invest heavily in technology, teams and partnerships with charities to tackle this issue. We have strict policies that govern where we allow ads to appear and we enforce these policies vigorously. When we find content that is in violation of our policies, we immediately stop serving ads or remove it altogether.

A spokesman for YouTube also told us it’s reviewing its policies in light of what Watson has highlighted, adding that it’s in the process of reviewing the specific videos and comments featured in his video — specifying also that some content has been taken down as a result of the review.

Although the spokesman emphasized that the majority of the videos flagged by Watson are innocent recordings of children doing everyday things. (Though of course the problem is that innocent content is being repurposed and time-sliced for abusive gratification and exploitation.)

The spokesman added that YouTube works with the National Center for Missing and Exploited Children to report accounts found making inappropriate comments about kids to law enforcement.

In wider discussion about the issue the spokesman told us that determining context remains a challenge for its AI moderation systems.

On the human moderation front he said the platform now has around 10,000 human reviewers tasked with assessing content flagged for review.

The volume of video content uploaded to YouTube is around 400 hours per minute, he added.

There is still very clearly a massive asymmetry around content moderation on user generated content platforms, with AI poorly suited to plug the gap given ongoing weakness in understanding context, even as platforms’ human moderation teams remain hopelessly under-resourced and outgunned vs the scale of the task.

Another key point which YouTube failed to mention is the clear tension between advertising-based business models that monetize content based on viewer engagement (such as its own), and content safety issues that require careful consideration of the substance of the content and the context in which it is consumed.

It’s certainly not the first time YouTube’s recommendation algorithms have been called out for negative impacts. In recent years the platform has been accused of automating radicalization by pushing viewers towards extremist and even terrorist content — which led YouTube to announce another policy change in 2017 related to how it handles content created by known extremists.

The wider societal impact of algorithmic suggestions that inflate conspiracy theories and/or promote bogus, anti-factual health or scientific content have also been repeatedly raised as a concern — including on YouTube.

And only last month YouTube said it would reduce recommendations of what it dubbed “borderline content” and content that “could misinform users in harmful ways”, citing examples such as videos promoting a fake miracle cure for a serious illness, or claiming the earth is flat, or making “blatantly false claims” about historic events such as the 9/11 terrorist attack in New York.

“While this shift will apply to less than one percent of the content on YouTube, we believe that limiting the recommendation of these types of videos will mean a better experience for the YouTube community,” it wrote then. “As always, people can still access all videos that comply with our Community Guidelines and, when relevant, these videos may appear in recommendations for channel subscribers and in search results. We think this change strikes a balance between maintaining a platform for free speech and living up to our responsibility to users.”

YouTube said that change of algorithmic recommendations around conspiracy videos would be gradual, and only initially affect recommendations on a small set of videos in the US.

It also noted that implementing the tweak to its recommendation engine would involve both machine learning tech and human evaluators and experts helping to train the AI systems.

“Over time, as our systems become more accurate, we’ll roll this change out to more countries. It’s just another step in an ongoing process, but it reflects our commitment and sense of responsibility to improve the recommendations experience on YouTube,” it added.

It remains to be seen whether YouTube will expand that policy shift and decide it must exercise greater responsibility in how its platform recommends and serves up videos of children for remote consumption in the future.

Political pressure may be one motivating force, with momentum building for regulation of online platforms — including calls for Internet companies to face clear legal liabilities and even a legal duty of care towards users vis-a-vis the content they distribute and monetize.

For example UK regulators have made legislating on Internet and social media safety a policy priority — with the government due to publish a White Paper setting out its plans for regulating platforms this winter.


Source: https://techcrunch.com/2019/02/18/youtube-under-fire-for-recommending-videos-of-kids-with-inappropriate-comments/

Build a Search Intent Dashboard to Unlock Better Opportunities

Posted by scott.taft

We’ve been talking a lot about search intent this week, and if you’ve been following along, you’re likely already aware of how “search intent” is essential for a robust SEO strategy. If, however, you’ve ever laboured for hours classifying keywords by topic and search intent, only to end up with a ton of data you don’t really know what to do with, then this post is for you.

I’m going to share how to take all that sweet keyword data you’ve categorized, put it into a Power BI dashboard, and start slicing and dicing to uncover a ton of insights — faster than you ever could before.

Building your keyword list

Every great search analysis starts with keyword research and this one is no different. I’m not going to go into excruciating detail about how to build your keyword list. However, I will mention a few of my favorite tools that I’m sure most of you are using already:

  • Search Query Report — What better place to look first than the search terms already driving clicks and (hopefully) conversions to your site.
  • Answer The Public — Great for pulling a ton of suggested terms, questions and phrases related to a single search term.
  • InfiniteSuggest — Like Answer The Public, but faster and allows you to build based on a continuous list of seed keywords.
  • MergeWords — Quickly expand your keywords by adding modifiers upon modifiers.
  • Grep Words — A suite of keyword tools for expanding, pulling search volume and more.

Please note that these tools are a great way to scale your keyword collecting but each will come with the need to comb through and clean your data to ensure all keywords are at least somewhat relevant to your business and audience.

Once I have an initial keyword list built, I’ll upload it to STAT and let it run for a couple days to get an initial data pull. This allows me to pull the ‘People Also Ask’ and ‘Related Searches’ reports in STAT to further build out my keyword list. All in all, I’m aiming to get to at least 5,000 keywords, but the more the merrier.

For the purposes of this blog post I have about 19,000 keywords I collected for a client in the window treatments space.

Categorizing your keywords by topic

Bucketing keywords into categories is an age-old challenge for most digital marketers but it’s a critical step in understanding the distribution of your data. One of the best ways to segment your keywords is by shared words. If you’re short on AI and machine learning capabilities, look no further than a trusty Ngram analyzer. I love to use this Ngram Tool from guidetodatamining.com — it ain’t much to look at, but it’s fast and trustworthy.

After dropping my 19,000 keywords into the tool and analyzing by unigram (or 1-word phrases), I manually select categories that fit with my client’s business and audience. I also make sure the unigram accounts for a decent amount of keywords (e.g. I wouldn’t pick a unigram that has a count of only 2 keywords).
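
If you’d rather script this step than use the web tool, a minimal sketch of the same unigram count in Python could look like this; keywords.txt is an assumed file with one keyword per line, not an export format from any particular tool.

from collections import Counter

# Read one keyword per line; the file name is an assumption for illustration.
with open("keywords.txt") as f:
    keywords = [line.strip().lower() for line in f if line.strip()]

# Count unigrams (single words) across the whole keyword list.
unigram_counts = Counter(word for kw in keywords for word in kw.split())

# Review the most common words to pick sensible trigger words, skipping
# unigrams that only account for a handful of keywords.
for word, count in unigram_counts.most_common(25):
    if count > 2:
        print(f"{word}: {count}")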

Using this data, I then create a Category Mapping table and map a unigram, or “trigger word”, to a Category like the following:

You’ll notice that for “curtain” and “drapes” I mapped both to the Curtains category. For my client’s business, they treat these as the same product, and doing this allows me to account for variations in keywords but ultimately group them how I want for this analysis.

Using this method, I create a Trigger Word-Category mapping based on my entire dataset. It’s possible that not every keyword will fall into a category and that’s okay — it likely means that keyword is not relevant or significant enough to be accounted for.

Creating a keyword intent map

Similar to identifying common topics by which to group your keywords, I’m going to follow a similar process but with the goal of grouping keywords by intent modifier.

Search intent is the end goal of a person using a search engine. Digital marketers can leverage these terms and modifiers to infer what types of results or actions a consumer is aiming for.

For example, if a person searches for “white blinds near me”, it is safe to infer that this person is looking to buy white blinds as they are looking for a physical location that sells them. In this case I would classify “near me” as a “Transactional” modifier. If, however, the person searched “living room blinds ideas” I would infer their intent is to see images or read blog posts on the topic of living room blinds. I might classify this search term as being at the “Inspirational” stage, where a person is still deciding what products they might be interested in and, therefore, isn’t quite ready to buy yet.

There is a lot of research on some generally accepted intent modifiers in search and I don’t intend to reinvent the wheel. This handy guide (originally published in STAT) provides a good review of intent modifiers you can start with.

I followed the same process as building out categories to build out my intent mapping and the result is a table of intent triggers and their corresponding Intent stage.
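
To make that mapping concrete, here is a small sketch of how the trigger-word tables could be applied to each keyword. The trigger words and labels are just examples drawn from this post, and a keyword that matches no trigger is simply left uncategorized, as described above.

from typing import Optional

# Example trigger-word tables; swap in the unigrams and intent modifiers
# you picked for your own dataset (these values are only illustrations).
category_triggers = {"curtain": "Curtains", "drapes": "Curtains", "blinds": "Blinds"}
intent_triggers = {"near me": "Transactional", "buy": "Transactional",
                   "ideas": "Inspirational", "how to": "Informational"}

def classify(keyword: str, triggers: dict) -> Optional[str]:
    """Return the label of the first trigger found in the keyword, else None."""
    for trigger, label in triggers.items():
        if trigger in keyword:
            return label
    return None

for kw in ["white blinds near me", "living room blinds ideas", "how to clean drapes"]:
    print(kw, "->", classify(kw, category_triggers), "/", classify(kw, intent_triggers))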

Intro to Power BI

There are tons of resources on how to get started with the free tool Power BI, one of which is our own founder Will Reynolds’ video series on using Power BI for Digital Marketing. This is a great place to start if you’re new to the tool and its capabilities.

Note: it’s not about the tool necessarily (although Power BI is a super powerful one). It’s more about being able to look at all of this data in one place and pull insights from it at speeds which Excel just won’t give you. If you’re still skeptical of trying a new tool like Power BI at the end of this post, I urge you to get the free download from Microsoft and give it a try.

Setting up your data in Power BI

Power BI’s power comes from linking multiple datasets together based on common “keys.” Think back to your Microsoft Access days and this should all start to sound familiar.

Step 1: Upload your data sources

First, open Power BI and you’ll see a button called “Get Data” in the top ribbon. Click that and then select the data format you want to upload. All of my data for this analysis is in CSV format so I will select the Text/CSV option for all of my data sources. You have to follow these steps for each data source. Click “Load” for each data source.

Step 2: Clean your data

In the Power BI ribbon menu, click the button called “Edit Queries.” This will open the Query Editor where we will make all of our data transformations.

The main things you’ll want to do in the Query Editor are the following:

  • Make sure all data formats make sense (e.g. keywords are formatted as text, numbers are formatted as decimals or whole numbers).
  • Rename columns as needed.
  • Create a domain column in your Top 20 report based on the URL column.

Close and apply your changes by hitting the “Close & Apply” button, as seen above.

Step 3: Create relationships between data sources

On the left side of Power BI is a vertical bar with icons for different views. Click the third one to see your relationships view.

In this view, we are going to connect all data sources to our ‘Keywords Bridge’ table by clicking and dragging a line from the field ‘Keyword’ in each table and to ‘Keyword’ in the ‘Keywords Bridge’ table (note that for the PPC Data, I have connected ‘Search Term’ as this is the PPC equivalent of a keyword, as we’re using here).

The last thing we need to do for our relationships is double-click on each line to ensure the following options are selected for each so that our dashboard works properly:

  • The cardinality is Many to 1
  • The relationship is “active”
  • The cross filter direction is set to “both”

We are now ready to start building our Intent Dashboard and analyzing our data.

Building the search intent dashboard

In this section I’ll walk you through each visual in the Search Intent Dashboard (as seen below):

Top domains by count of keywords

Visual type: Stacked Bar Chart visual

Axis: I’ve nested URL under Domain so I can drill down to see this same breakdown by URL for a specific Domain

Value: Distinct count of keywords

Legend: Result Types

Filter: Top 10 filter on Domains by count of distinct keywords

Keyword breakdown by result type

Visual type: Donut chart

Legend: Result Types

Value: Count of distinct keywords, shown as Percent of grand total

Metric Cards

Sum of Distinct MSV

Because the Top 20 report shows each keyword 20 times, we need to create a calculated measure in Power BI to only sum MSV for the unique list of keywords. Use this formula for that calculated measure:

Sum Distinct MSV = SUMX(DISTINCT('Table'[Keywords]), FIRSTNONBLANK('Table'[MSV], 0))

Keywords

This is just a distinct count of keywords

Slicer: PPC Conversions

Visual type: Slicer

Drop your PPC Conversions field into a slicer and set the format to “Between” to get this nifty slider visual.

Tables

Visual type: Table or Matrix (a matrix allows for drilling down similar to a pivot table in Excel)

Values: Here I have Category or Intent Stage and then the distinct count of keywords.

Pulling insights from your search intent dashboard

This dashboard is now a Swiss Army knife of data that allows you to slice and dice to your heart’s content. Below are a couple examples of how I use this dashboard to pull out opportunities and insights for my clients.

Where are competitors winning?

With this data we can quickly see who the top competing domains are, but what’s more valuable is seeing who the competitors are for a particular intent stage and category.

I start by filtering to the “Informational” stage, since it represents the most keywords in our dataset. I also filter to the top category for this intent stage which is “Blinds”. Looking at my Keyword Count card, I can now see that I’m looking at a subset of 641 keywords.

Note: To filter multiple visuals in Power BI, you need to press and hold the “Ctrl” button each time you click a new visual to maintain all the filters you clicked previously.

The top competing subdomain here is videos.blinds.com with visibility in the top 20 for over 250 keywords, most of which are for video results. I hit ctrl+click on the Video results portion of videos.blinds.com to update the keywords table to only keywords where videos.blinds.com is ranking in the top 20 with a video result.

From all this I can now say that videos.blinds.com is ranking in the top 20 positions for about 30 percent of keywords that fall into the “Blinds” category and the “Informational” intent stage. I can also see that most of the keywords here start with “how to”, which tells me that most likely people searching for blinds in an informational stage are looking for how-to instructions and that video may be a desired content format.

Where should I focus my time?

Whether you’re in-house or at an agency, time is always a hot commodity. You can use this dashboard to quickly identify opportunities that you should be prioritizing first — opportunities that can guarantee you’ll deliver bottom-line results.

To find these bottom-line results, we’re going to filter our data using the PPC conversions slicer so that our data only includes keywords that have converted at least once in our PPC campaigns.

Once I do that, I can see I’m working with a pretty limited set of keywords that have been bucketed into intent stages, but I can continue by drilling into the “Transactional” intent stage because I want to target queries that are linked to a possible purchase.

Note: Not every keyword will fall into an intent stage if it doesn’t meet the criteria we set. These keywords will still appear in the data, but this is the reason why your total keyword count might not always match the total keyword count in the intent stages or category tables.

From there I want to focus on those “Transactional” keywords that are triggering answer boxes to make sure I have good visibility, since they are converting for me on PPC. To do that, I filter to only show keywords triggering answer boxes. Based on these filters I can look at my keyword table and see most (if not all) of the keywords are “installation” keywords and I don’t see my client’s domain in the top list of competitors. This is now an area of focus for me to start driving organic conversions.

Wrap up

I’ve only just scratched the surface — there’s tons that can be done with this data inside a tool like Power BI. Having a solid data set of keywords and visuals that I can revisit repeatedly for a client and continuously pull out opportunities to help fuel our strategy is, for me, invaluable. I can work efficiently without having to go back to keyword tools whenever I need an idea. Hopefully you find this makes building an intent-based strategy more efficient and sound for your business or clients.



Source: https://moz.com/blog/build-a-search-intent-dashboard-to-unlock-better-opportunities

Detecting Link Manipulation and Spam with Domain Authority

Posted by rjonesx.

Over 7 years ago, while still an employee at Virante, Inc. (now Hive Digital), I wrote a post on Moz outlining some simple methods for detecting backlink manipulation by comparing one’s backlink profile to an ideal model based on Wikipedia. At the time, I was limited in the research I could perform because I was a consumer of the API, lacking access to deeper metrics, measurements, and methodologies to identify anomalies in backlink profiles. We used these techniques in spotting backlink manipulation with tools like Remove’em and Penguin Risk, but they were always handicapped by the limitations of consumer facing APIs. Moreover, they didn’t scale. It is one thing to collect all the backlinks for a site, even a large site, and judge every individual link for source type, quality, anchor text, etc. Reports like these can be accessed from dozens of vendors if you are willing to wait a few hours for the report to complete. But how do you do this for 30 trillion links every single day?

Since the launch of Link Explorer and my residency here at Moz, I have had the luxury of far less filtered data, giving me a far deeper, clearer picture of the tools available to backlink index maintainers to identify and counter manipulation. While I in no way intend to say that all manipulation can be detected, I want to outline just some of the myriad surprising methodologies to detect spam.

The general methodology

You don’t need to be a data scientist or a math nerd to understand this simple practice for identifying link spam. While there certainly is a great deal of math used in the execution of measuring, testing, and building practical models, the general gist is plainly understandable.

The first step is to get a good random sample of links from the web, which you can read about here. But let’s assume you have already finished that step. Then, for any property of those random links (DA, anchor text, etc.), you figure out what is normal or expected. Finally, you look for outliers and see if those correspond with something important – like sites that are manipulating the link graph, or sites that are exceptionally good. Let’s start with an easy example, link decay.
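As a rough illustration of that three-step loop, here is a minimal Python sketch: take a random sample, work out what is expected for some property, and flag anything several standard deviations away. The property values and the threshold are made up for illustration and are not Moz's actual model.

```python
import statistics

def find_outliers(values, z_threshold=2.0):
    """Flag values more than z_threshold standard deviations from the sample mean."""
    mean = statistics.mean(values)
    stdev = statistics.stdev(values)
    return [v for v in values if abs(v - mean) / stdev > z_threshold]

# Hypothetical property (e.g., a decay rate) measured over a random sample of links.
sample = [0.12, 0.15, 0.11, 0.14, 0.13, 0.12, 0.16, 0.98]
print(find_outliers(sample))  # [0.98] stands out as an anomaly
```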

Link decay and link spam

Link decay is the natural occurrence of links either dropping off the web or changing URLs. For example, if you get links after you send out a press release, you would expect some of those links to eventually disappear as the pages are archived or removed for being old. And, if you were to get a link from a blog post, you might expect to have a homepage link on the blog until that post is pushed to the second or third page by new posts.

But what if you bought your links? What if you own a large number of domains and all the sites link to each other? What if you use a PBN? These links tend not to decay. Exercising control over your inbound links often means that you keep them from ever decaying. Thus, we can create a simple hypothesis:

Hypothesis: The link decay rate of sites manipulating the link graph will differ from sites with natural link profiles.

The methodology for testing this hypothesis is just as we discussed before. We first figure out what is natural. What does a random site’s link decay rate look like? Well, we simply get a bunch of sites and record how fast links are deleted (we visit a page and see a link is gone) vs. their total number of links. We then can look for anomalies.

In this case of anomaly hunting, I’m going to make it really easy. No statistics, no math, just a quick look at what pops up when we first sort by Lowest Decay Rate and then sort by Highest Domain Authority to see who is at the tail-end of the spectrum.
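In code, that quick look is nothing more than computing a deletion ratio and sorting. The numbers and column names below are invented for illustration; the idea is simply to float high-DA, zero-decay domains to the top.

```python
import pandas as pd

# Hypothetical per-domain link data: total links seen vs. links later found deleted.
sites = pd.DataFrame({
    "domain": ["a.com", "b.com", "c.com", "d.com"],
    "domain_authority": [62, 38, 55, 70],
    "total_links": [12000, 8000, 3000, 25000],
    "deleted_links": [0, 2100, 950, 6000],
})

sites["decay_rate"] = sites["deleted_links"] / sites["total_links"]

# Sort by lowest decay rate, then highest DA: domains with a decent DA but zero
# link decay rise to the top, which is where the link networks showed up.
suspects = sites.sort_values(["decay_rate", "domain_authority"], ascending=[True, False])
print(suspects)
```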

spreadsheet of sites with high deleted link ratios

Success! Every example we see of a good DA score but zero link decay appears to be powered by a link network of some sort. This is the Aha! moment of data science that is so fun. What is particularly interesting is that we find spam on both ends of the distribution — that is to say, sites with 0% decay and sites with near 100% decay rates both tend to be spammy. The first type tends to be part of a link network; the second type tends to spam their backlinks onto sites that others are spamming too, so their links quickly shuffle off to other pages.

Of course, now we do the hard work of building a model that actually takes this into account and accurately reduces Domain Authority relative to the severity of the link spam. But you might be asking…

These sites don’t rank in Google — why do they have decent DAs in the first place?

Well, this is a common problem with training sets. DA is trained on sites that rank in Google so that we can figure out who will rank above whom. However, historically, we haven’t (and no one to my knowledge in our industry has) taken into account random URLs that don’t rank at all. This is something we’re solving for in the new DA model set to launch in early March, so stay tuned, as this represents a major improvement in the way we calculate DA!

Spam Score distribution and link spam

One of the most exciting new additions to the upcoming Domain Authority 2.0 is the use of our Spam Score. Moz’s Spam Score is a link-blind (we don’t use links at all) metric that predicts the likelihood a domain will be penalized or banned by Google. The higher the score, the worse the site.

Now, we could just ignore any links from sites with Spam Scores over 70 and call it a day, but it turns out that common link manipulation schemes leave fascinating patterns behind. The same simple methodology applies: use a random sample of URLs to establish what a normal backlink profile looks like, then check whether Spam Score is distributed abnormally among the backlinks to a site. Let me show you just one.

It turns out that acting natural is really hard to do. Even the best attempts often fall short, as did this particularly pernicious link spam network. This network had haunted me for 2 years because it included a directory of the top million sites, so if you were one of those sites, you could see anywhere from 200 to 600 followed links show up in your backlink profile. I called it “The Globe” network. It was easy to look at the network and see what they were doing, but could we spot it automatically so that we could devalue other networks like it in the future? When we looked at the link profile of sites included in the network, the Spam Score distribution lit up like a Christmas tree.

spreadsheet with distribution of spam scores

Most sites get the majority of their backlinks from low Spam Score domains and get fewer and fewer as the Spam Score of the linking domains goes up. But this link network couldn’t hide, because Spam Score flagged the sites in the network as having quality issues. If we had relied only on ignoring links with bad Spam Scores, we would never have discovered this issue. Instead, we found a great classifier for finding sites that are likely to be penalized by Google for bad link building practices.
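A crude way to operationalize that comparison is to bucket a site's linking domains by Spam Score and measure how far the shape deviates from the baseline built from random URLs. The bucket boundaries and the sample numbers below are invented for illustration only.

```python
def spam_score_distribution(spam_scores, buckets=(0, 30, 60, 101)):
    """Return the share of linking domains that fall into each Spam Score bucket."""
    counts = [0] * (len(buckets) - 1)
    for score in spam_scores:
        for i in range(len(buckets) - 1):
            if buckets[i] <= score < buckets[i + 1]:
                counts[i] += 1
                break
    total = sum(counts) or 1
    return [c / total for c in counts]

# Hypothetical baseline from a random sample vs. a suspicious site's backlink profile.
baseline = spam_score_distribution([5, 12, 8, 35, 22, 61, 3, 18, 44, 9])
suspect = spam_score_distribution([72, 68, 80, 75, 66, 12, 71, 69, 74, 77])

# Total variation between the two shapes: the larger it is, the less natural the profile.
deviation = sum(abs(a - b) for a, b in zip(baseline, suspect))
print(f"Distribution deviation: {deviation:.2f}")
```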

DA distribution and link spam

We can find similar patterns among sites with the distribution of inbound Domain Authority. It’s common for businesses seeking to increase their rankings to set minimum quality standards on their outreach campaigns, often DA30 and above. An unfortunate outcome of this is that what remains are glaring examples of sites with manipulated link profiles.

Let me take a moment and be clear here. A manipulated link profile is not necessarily against Google’s guidelines. If you do targeted PR outreach, it is reasonable to expect that such a distribution might occur without any attempt to manipulate the graph. However, the real question is whether Google wants sites that perform such outreach to perform better. If not, this glaring example of link manipulation is pretty easy for Google to dampen, if not ignore altogether.

spreadsheet with distribution of domain authority

A normal link graph for a site that is not targeting high link equity domains will have the majority of its links coming from DA 0–10 sites, slightly fewer from DA 10–20, and so on until there are almost no links from DA 90+. This makes sense, as the web has far more low-DA sites than high-DA sites. But all the sites above have abnormal link distributions, which makes it easy to detect and correct link value at scale.
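A simple way to check a profile against that expected shape is to bucket linking domains by DA and test whether the bucket counts decline as DA rises. This sketch uses invented numbers and a deliberately naive test; the production model is obviously more involved.

```python
def da_bucket_counts(link_das, step=10):
    """Count linking domains per DA bucket: 0-9, 10-19, ..., 90-100."""
    counts = [0] * 10
    for da in link_das:
        counts[min(da // step, 9)] += 1
    return counts

def looks_natural(counts):
    """A natural profile has (roughly) non-increasing counts as DA rises."""
    return all(counts[i] >= counts[i + 1] for i in range(len(counts) - 1))

# Hypothetical profile built only from DA30+ outreach targets.
outreach_heavy = [34, 41, 38, 52, 47, 33, 36, 61, 45, 39, 58, 44]
print(da_bucket_counts(outreach_heavy))                  # counts spike in the DA30-60 buckets
print(looks_natural(da_bucket_counts(outreach_heavy)))   # False: abnormal shape
```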

Now, I want to be clear: these are not necessarily examples of violating Google’s guidelines. However, they are manipulations of the link graph. It’s up to you to determine whether you believe Google takes the time to differentiate between how the outreach was conducted that resulted in the abnormal link distribution.

What doesn’t work

For every type of link manipulation detection method we discover, we scrap dozens more. Some of these are actually quite surprising. Let me write about just one of the many.

One surprising example was the ratio of nofollow to follow links. It seems pretty straightforward that comment, forum, and other types of spammers would end up accumulating lots of nofollowed links, thereby leaving a pattern that is easy to discern. Well, it turns out this is not true at all.

The ratio of nofollow to follow links turns out to be a poor indicator, as popular sites like facebook.com often have a higher ratio than even pure comment spammers. This is likely due to the use of widgets and beacons and the legitimate usage of popular sites like facebook.com in comments across the web. Of course, this isn’t always the case. There are some sites with 100% nofollow links and a high number of root linking domains. These anomalies, like “Comment Spammer 1,” can be detected quite easily, but as a general measurement the ratio does not serve as a good classifier for spam or ham.

So what’s next?

Moz is continually traversing the link graph looking for ways to improve Domain Authority, using everything from basic linear algebra to complex neural networks. The goal is simple: we want to make the best Domain Authority metric ever. We want a metric users can trust in the long run to root out spam just like Google does (and help you determine when you or your competitors are pushing the limits) while at the same time maintaining or improving correlations with rankings. Of course, we have no expectation of rooting out all spam — no one can do that. But we can do a better job. Led by the incomparable Neil Martinsen-Burrell, our metric will stand alone in the industry as the canonical method for measuring the likelihood a site will rank in Google.


We’re launching Domain Authority 2.0 on March 5th! Check out our helpful resources here, or sign up for our webinar this Thursday, February 21st, for more info on how to communicate changes like this to clients and stakeholders.




Source: https://moz.com/blog/domain-authority-and-spam-detection

UK parliament calls for antitrust, data abuse probe of Facebook

A final report by a British parliamentary committee which spent months last year investigating online political disinformation makes very uncomfortable reading for Facebook — with the company singled out for “disingenuous” and “bad faith” responses to democratic concerns about the misuse of people’s data.

In the report, published today, the committee has also called for Facebook’s use of user data to be investigated by the UK’s data watchdog.

In an evidence session to the committee late last year, the Information Commissioner’s Office (ICO) suggested Facebook needs to change its business model — warning the company risks burning user trust for good.

Last summer the ICO also called for an ethical pause of social media ads for election campaigning, warning of the risk of developing “a system of voter surveillance by default”.

Interrogating the distribution of ‘fake news’

The UK parliamentary enquiry looked into both Facebook’s own use of personal data to further its business interests, such as by providing access to user data to developers and advertisers in order to increase revenue and/or usage, and what Facebook claimed was ‘abuse’ of its platform by the disgraced (and now defunct) political data company Cambridge Analytica — which in 2014 paid a developer with access to Facebook’s developer platform to extract information on millions of Facebook users to build voter profiles intended to influence elections.

The committee’s conclusion about Facebook’s business is a damning one with the company accused of operating a business model that’s predicated on selling abusive access to people’s data.

“Far from Facebook acting against ‘sketchy’ or ‘abusive’ apps, of which action it has produced no evidence at all, it, in fact, worked with such apps as an intrinsic part of its business model,” the committee argues. “This explains why it recruited the people who created them, such as Joseph Chancellor [the co-founder of GSR, the developer which sold Facebook user data to Cambridge Analytica]. Nothing in Facebook’s actions supports the statements of Mark Zuckerberg who, we believe, lapsed into ‘PR crisis mode’ when its real business model was exposed.”

“This is just one example of the bad faith which we believe justifies governments holding a business such as Facebook at arms’ length. It seems clear to us that Facebook acts only when serious breaches become public. This is what happened in 2015 and 2018.”

“We consider that data transfer for value is Facebook’s business model and that Mark Zuckerberg’s statement that ‘we’ve never sold anyone’s data’ is simply untrue,” the committee also concludes.

We’ve reached out to Facebook for comment on the committee’s report.

Last fall the company was issued the maximum possible fine under relevant UK data protection law for failing to safeguard user data in the Cambridge Analytica saga, though Facebook is appealing the ICO’s penalty, claiming there’s no evidence UK users’ data was misused.

During the course of a multi-month enquiry last year investigating disinformation and fake news, the Digital, Culture, Media and Sport (DCMS) committee heard from 73 witnesses in 23 oral evidence sessions, as well as taking in 170 written submissions. In all the committee says it posed more than 4,350 questions.

Its wide-ranging, 110-page report makes detailed observations on a number of technologies and business practices across the social media, adtech and strategic communications space, and culminates in a long list of recommendations for policymakers and regulators — reiterating its call for tech platforms to be made legally liable for content.

Among the report’s main recommendations are:

  • clear legal liabilities for tech companies to act against “harmful or illegal content”, with the committee calling for a compulsory Code of Ethics overseen by an independent regulator with statutory powers to obtain information from companies, instigate legal proceedings and issue (“large”) fines for non-compliance
  • privacy law protections to cover inferred data so that models used to make inferences about individuals are clearly regulated under UK data protection rules
  • a levy on tech companies operating in the UK to support enhanced regulation of such platforms
  • a call for the ICO to investigate Facebook’s platform practices and use of user data
  • a call for the Competition and Markets Authority to comprehensively “audit” the online advertising ecosystem, and also to investigate whether Facebook specifically has engaged in anti-competitive practices
  • changes to UK election law to take account of digital campaigning, including “absolute transparency of online political campaigning” — including “full disclosure of the targeting used” — and more powers for the Electoral Commission
  • a call for a government review of covert digital influence campaigns by foreign actors (plus a review of legislation in the area to consider if it’s adequate) — including the committee urging the government to launch independent investigations of recent past elections to examine “foreign influence, disinformation, funding, voter manipulation, and the sharing of data, so that appropriate changes to the law can be made and lessons can be learnt for future elections and referenda”
  • a requirement on social media platforms to develop tools to distinguish between “quality journalism” and low quality content sources, and/or work with existing providers to make such services available to users

Among the areas the committee’s report covers off with detailed commentary are data use and targeting; advertising and political campaigning — including foreign influence; and digital literacy.

It argues that regulation is urgently needed to restore democratic accountability and “make sure the people stay in charge of the machines”.

Ministers are due to produce a White Paper on social media safety regulation this winter and the committee writes that it hopes its recommendations will inform government thinking.

“Much has been said about the coarsening of public debate, but when these factors are brought to bear directly in election campaigns then the very fabric of our democracy is threatened,” the committee writes. “This situation is unlikely to change. What does need to change is the enforcement of greater transparency in the digital sphere, to ensure that we know the source of what we are reading, who has paid for it and why the information has been sent to us. We need to understand how the big tech companies work and what happens to our data.”

The report calls for tech companies to be regulated as a new category, “not necessarily either a ‘platform’ or a ‘publisher’”, but one which legally tightens their liability for harmful content published on their platforms.

Last month another UK parliamentary committee also urged the government to place a legal ‘duty of care’ on platforms to protect users under the age of 18 — and the government said then that it has not ruled out doing so.

“Digital gangsters”

Competition concerns are also raised several times by the committee.

“Companies like Facebook should not be allowed to behave like ‘digital gangsters’ in the online world, considering themselves to be ahead of and beyond the law,” the DCMS committee writes, going on to urge the government to investigate whether Facebook specifically has been involved in any anti-competitive practices and conduct a review of its business practices towards other developers “to decide whether Facebook is unfairly using its dominant market position in social media to decide which businesses should succeed or fail”. 

“The big tech companies must not be allowed to expand exponentially, without constraint or proper regulatory oversight,” it adds.

The committee suggests existing legal tools are up to the task of reining in platform power, citing privacy laws, data protection legislation, antitrust and competition law — and calling for a “comprehensive audit” of the social media advertising market by the UK’s Competition and Markets Authority, and a specific antitrust probe of Facebook’s business practices.

“If companies become monopolies they can be broken up, in whatever sector,” the committee points out. “Facebook’s handling of personal data, and its use for political campaigns, are prime and legitimate areas for inspection by regulators, and it should not be able to evade all editorial responsibility for the content shared by its users across its platforms.”

The social networking giant was the recipient of many awkward queries during the course of the committee’s enquiry but it refused repeated requests for its founder Mark Zuckerberg to testify — sending a number of lesser staffers in his stead.

That decision continues to be seized upon by the committee as evidence of a lack of democratic accountability. It also accuses Facebook of having an intentionally “opaque management structure”.

“By choosing not to appear before the Committee and by choosing not to respond personally to any of our invitations, Mark Zuckerberg has shown contempt towards both the UK Parliament and the ‘International Grand Committee’, involving members from nine legislatures from around the world,” the committee writes.

“The management structure of Facebook is opaque to those outside the business and this seemed to be designed to conceal knowledge of and responsibility for specific decisions. Facebook used the strategy of sending witnesses who they said were the most appropriate representatives, yet had not been properly briefed on crucial issues, and could not or chose not to answer many of our questions. They then promised to follow up with letters, which—unsurprisingly—failed to address all of our questions. We are left in no doubt that this strategy was deliberate.”

It doubles down on the accusation that Facebook sought to deliberately mislead its enquiry — pointing to incorrect and/or inadequate responses from staffers who did testify.

“We are left with the impression that either [policy VP] Simon Milner and [CTO] Mike Schroepfer deliberately misled the Committee or they were deliberately not briefed by senior executives at Facebook about the extent of Russian interference in foreign elections,” it suggests.

In an unusual move late last year the committee used rare parliamentary powers to seize a cache of documents related to an active US lawsuit against Facebook filed by a developer called Six4Three.

The cache of documents is referenced extensively in the final report, and appears to have fuelled antitrust concerns, with the committee arguing that the evidence obtained from the internal company documents “indicates that Facebook was willing to override its users’ privacy settings in order to transfer data to some app developers, to charge high prices in advertising to some developers, for the exchange of that data, and to starve some developers… of that data, thereby causing them to lose their business”.

“It seems clear that Facebook was, at the very least, in violation of its Federal Trade Commission [privacy] settlement,” the committee also argues, citing evidence from the former chief technologist of the FTC, Ashkan Soltani.

On Soltani’s evidence, it writes:

Ashkan Soltani rejected [Facebook’s] claim, saying that up until 2012, platform controls did not exist, and privacy controls did not apply to apps. So even if a user set their profile to private, installed apps would still be able to access information. After 2012, Facebook added platform controls and made privacy controls applicable to apps. However, there were ‘whitelisted’ apps that could still access user data without permission and which, according to Ashkan Soltani, could access friends’ data for nearly a decade before that time. Apps were able to circumvent users’ privacy or platform settings and access friends’ information, even when the user disabled the Platform. This was an example of Facebook’s business model driving privacy violations.

While Facebook is singled out for the most eviscerating criticism in the report (and targeted for specific investigations), the committee’s long list of recommendations is addressed to social media businesses and online advertisers generally.

It also calls for far more transparency from platforms, writing that: “Social media companies need to be more transparent about their own sites, and how they work. Rather than hiding behind complex agreements, they should be informing users of how their sites work, including curation functions and the way in which algorithms are used to prioritise certain stories, news and videos, depending on each user’s profile. The more people know how the sites work, and how the sites use individuals’ data, the more informed we shall all be, which in turn will make choices about the use and privacy of sites easier to make.”

The committee also urges a raft of updates to UK election law — branding it “not fit for purpose” in the digital era.

Its interim report, published last summer, made many of the same recommendations.

Russian interest

But despite pressing the government for urgent action, there was only a cool response from ministers then, with the government remaining tied up trying to shape a response to the 2016 Brexit vote which split the country (with social media’s election-law-deforming help). Instead it opted for a ‘wait and see’ approach.

The government accepted just three of the preliminary report’s forty-two recommendations outright, and fully rejected four.

Nonetheless, the committee has doubled down on its preliminary conclusions, reiterating earlier recommendations and pushing the government once again to act.

It cites fresh evidence, including from additional testimony, as well as pointing to other reports (such as the recently published Cairncross Review) which it argues back up some of the conclusions reached. 

“Our inquiry over the last year has identified three big threats to our society. The challenge for the year ahead is to start to fix them; we cannot delay any longer,” writes Damian Collins MP, chair of the DCMS Committee, in a statement. “Democracy is at risk from the malicious and relentless targeting of citizens with disinformation and personalised ‘dark adverts’ from unidentifiable sources, delivered through the major social media platforms we use every day. Much of this is directed from agencies working in foreign countries, including Russia.

“The big tech companies are failing in the duty of care they owe to their users to act against harmful content, and to respect their data privacy rights. Companies like Facebook exercise massive market power which enables them to make money by bullying the smaller technology companies and developers who rely on this platform to reach their customers.”

“These are issues that the major tech companies are well aware of, yet continually fail to address. The guiding principle of the ‘move fast and break things’ culture often seems to be that it is better to apologise than ask permission. We need a radical shift in the balance of power between the platforms and the people,” he added.

“The age of inadequate self-regulation must come to an end. The rights of the citizen need to be established in statute, by requiring the tech companies to adhere to a code of conduct written into law by Parliament, and overseen by an independent regulator.”

The committee says it expects the government to respond to its recommendations within two months — noting rather dryly: “We hope that this will be much more comprehensive, practical, and constructive than their response to the Interim Report, published in October 2018. Several of our recommendations were not substantively answered and there is now an urgent need for the Government to respond to them.”

It also makes a point of including an analysis of Internet traffic to the government’s own response
Source: https://techcrunch.com/2019/02/17/uk-parliament-calls-for-antitrust-data-abuse-probe-of-facebook/