UK spies using social media data for mass surveillance

 Privacy rights group Privacy International says it has obtained evidence for the first time that UK spy agencies are collecting social media information on potentially millions of people.  Read More
Source: https://techcrunch.com/2017/10/17/uk-spies-using-social-media-data-for-mass-surveillance/?ncid=rss

Twitter introduces a new video-centric ad format

While there’s a larger debate swirling around Twitter’s problems with abuse and harassment, the business side of the business is still chugging along. Today, the company is unveiling a new ad format called the Video Website Card, which it describes as “a creative format that combines the power of video with the ability to drive users back to a site to learn more or take… Read More
Source: https://techcrunch.com/2017/10/17/twitter-video-website-card/?ncid=rss

Snap and NBCUniversal team up on Snapchat scripted shows

 Snapchat parent co. Snap is working with NBCUniversal on its first scripted series, with an initial effort led by the Duplass Brothers (Mark and Jay, to those in the know). The duo will create scripted programming for Snapchat through Donut, their own creative production venture. It’s maybe a weird choice for Snap and its youthful audience, but it’s a team with a proven… Read More
Source: https://techcrunch.com/2017/10/17/snap-and-nbcuniversal-team-up-on-snapchat-scripted-shows/?ncid=rss

Facebook tests a resume “work histories” feature to boost recruitment efforts

 As LinkedIn adds video and other features to look a little more like Facebook, Facebook continues to take on LinkedIn in the world of social recruitment services. In the latest development, Facebook is testing a feature to let users create resumes — which Facebook calls a “work histories” feature — and share them privately on the site as part of their job hunt.… Read More
Source: https://techcrunch.com/2017/10/17/facebook-takes-another-bite-of-linkedin/?ncid=rss

Silenced by ‘free speech’

 There’s a fundamental incongruency between being pro ‘free speech’ and operating a global social network for civil public discussion. Twitter is struggling with it. Facebook is struggling with it, too. And it can’t be solved by a little more transparency or by hoping average citizens will do the right thing. The principle of free speech on which the United States… Read More
Source: https://techcrunch.com/2017/10/16/scaling-civility/?ncid=rss

Google Shares Details About the Technology Behind Googlebot

Posted by goralewicz

Crawling and indexing have been hot topics over the last few years. As soon as Google launched Google Panda, people rushed to their server logs and crawling stats and began fixing their index bloat. All those problems didn’t exist in the “SEO = backlinks” era from a few years ago. With this exponential growth of technical SEO, we need to get more and more technical. That being said, we still don’t know exactly how Google crawls our websites. Many SEOs still can’t tell the difference between crawling and indexing.

The biggest problem, though, is that when we want to troubleshoot indexing problems, the only tools in our arsenal are Google Search Console and its Fetch and Render feature. Once your website includes more than HTML and CSS, there’s a lot of guesswork involved in how your content will be indexed by Google. This approach is risky, expensive, and can fail multiple times. Even when you discover the pieces of your website that weren’t indexed properly, it’s extremely difficult to get to the bottom of the problem and find the fragments of code responsible.

Fortunately, this is about to change. Recently, Ilya Grigorik from Google shared one of the most valuable insights into how crawlers work:

Interestingly, this tweet didn’t get nearly as much attention as I would expect.

So what does Ilya’s revelation in this tweet mean for SEOs?

Knowing that Chrome 41 is the technology behind the Web Rendering Service is a game-changer. Before this announcement, our only option was to use Fetch and Render in Google Search Console to see our pages the way the Web Rendering Service (WRS) sees them, and troubleshooting technical problems that way required experimenting and building staging environments. Now, all you need to do is download and install Chrome 41 to see how your website loads in that browser. That’s it.

You can check the features and capabilities that Chrome 41 supports by visiting Caniuse.com or Chromestatus.com (Googlebot should support similar features). These two websites make a developer’s life much easier.

Even though we don’t know exactly which version Ilya had in mind, we can find Chrome’s version used by the WRS by looking at the server logs. It’s Chrome 41.0.2272.118.
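If you want to confirm that version against your own server logs, a quick search is enough. The snippet below is only a minimal sketch: it assumes an Apache/Nginx combined log format and a hypothetical log path (adjust both for your setup), and note that the Chrome/41.0.2272.118 token shows up in the requests made by Search Console’s Fetch and Render bot rather than in the classic Googlebot/2.1 user-agent string.

# A sketch only: combined log format assumed, log path is hypothetical.
# Lists every distinct user agent containing the Chrome 41 token, with request counts.
grep -i "chrome/41" /var/log/nginx/access.log | awk -F'"' '{print $6}' | sort | uniq -c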

It will be updated sometime in the future

Chrome 41 was created two years ago (in 2015), so it’s far removed from the current version of the browser. However, as Ilya Grigorik said, an update is coming:

I was lucky enough to get Ilya Grigorik to read this article before it was published, and he provided a ton of valuable feedback on this topic. He mentioned that they are hoping to have the WRS updated by 2018. Fingers crossed!

Google uses Chrome 41 for rendering. What does that mean?

We now have some interesting information about how Google renders websites. But what does that mean, practically, for site developers and their clients? Does this mean we can now ignore server-side rendering and deploy client-rendered, JavaScript-rich websites?

Not so fast. Here is what Ilya Grigorik had to say in response to this question:

We now know the WRS’s capabilities for rendering JavaScript and how to debug them. Remember, though, that not every crawler supports JavaScript crawling. As of today, JavaScript crawling is only supported by Google and Ask (and Ask is most likely powered by Google). And even if you don’t care about social media crawlers or search engines other than Google, keep in mind that even with Chrome 41, not all JavaScript frameworks can be indexed by Google (read more about JavaScript frameworks crawling and indexing). Knowing which browser the WRS runs on simply lets us troubleshoot and diagnose these problems more precisely.

Don’t get your hopes up

All that said, there are a few reasons to keep your excitement at bay.

Remember that version 41 of Chrome is over two years old. It may not work very well with modern JavaScript frameworks. To test it yourself, open http://jsseo.expert/polymer/ using Chrome 41, and then open it in any up-to-date browser you are using.

The page in Chrome 41 looks like this:

The content parsed by Polymer is invisible (meaning it wasn’t processed correctly). This is also a perfect example of how to troubleshoot potential indexing issues: the problem you’re seeing above can be solved if it’s diagnosed properly. Let me quote Ilya:

“If you look at the raised Javascript error under the hood, the test page is throwing an error due to unsupported (in M41) ES6 syntax. You can test this yourself in M41, or use the debug snippet we provided in the blog post to log the error into the DOM to see it.”

I believe this is another powerful tool for web developers willing to make their JavaScript websites indexable. We will definitely expand our experiment and work with Ilya’s feedback.

The Fetch and Render tool is the Chrome v. 41 preview

There’s another interesting thing about Chrome 41. Google Search Console’s Fetch and Render tool is simply a Chrome 41 preview. The right-hand view (“This is how a visitor to your website would have seen the page”) is generated by the Google Search Console bot, which is… Chrome 41.0.2272.118.

There’s evidence that both Googlebot and the Google Search Console bot render pages using Chrome 41. Still, we don’t know exactly what the differences between them are. One noticeable difference is that the Google Search Console bot doesn’t respect the robots.txt file. There may be more, but for the time being, we’re not able to point them out.

Chrome 41 vs Fetch as Google: A word of caution

Chrome 41 is a great tool for debugging Googlebot. However, sometimes (not often) there’s a situation in which Chrome 41 renders a page properly, but the screenshots from Google Fetch and Render suggest that Google can’t handle the page. It could be caused by CSS animations and transitions, Googlebot timeouts, or the usage of features that Googlebot doesn’t support. Let me show you an example.

Chrome 41 preview:

Image blurred for privacy

The above page has quite a lot of content and images, but it looks completely different in Google Search Console.

Google Search Console preview for the same URL:

As you can see, Google Search Console’s preview of this URL is completely different than what you saw on the previous screenshot (Chrome 41). All the content is gone and all we can see is the search bar.

From what we’ve noticed, Google Search Console renders CSS a little differently than Chrome 41 does. This doesn’t happen often, but as with most tools, we need to double-check whenever possible.

This leads us to a question…

What features are supported by Googlebot and WRS?

According to the Rendering on Google Search guide:

  • Googlebot doesn’t support IndexedDB, WebSQL, and WebGL.
  • HTTP cookies and local storage, as well as session storage, are cleared between page loads.
  • All features requiring user permissions (like Notifications API, clipboard, push, device-info) are disabled.
  • Google can’t index 3D and VR content.
  • Googlebot only supports HTTP/1.1 crawling.

The last point is really interesting. Despite statements from Google over the last 2 years, Google still only crawls using HTTP/1.1.

No HTTP/2 support (still)

We’ve mostly been covering how Googlebot uses Chrome, but there’s another recent discovery to keep in mind.

There is still no support for HTTP/2 for Googlebot.

Since it’s now clear that Googlebot doesn’t support HTTP/2, even if your website supports HTTP/2 you can’t drop HTTP/1.1 optimization: Googlebot can crawl only using HTTP/1.1.
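You can verify this against your own access logs. The snippet below is only a minimal sketch: it assumes an Apache/Nginx combined log format and a hypothetical log path (adjust both for your setup). The request line recorded for each hit (e.g. “GET /page HTTP/1.1”) carries the protocol version, so counting versions for Googlebot requests shows which protocol it actually crawls with.

# A sketch only: combined log format assumed, log path is hypothetical.
# Pulls the request line for Googlebot hits and counts the protocol versions.
grep -i "googlebot" /var/log/nginx/access.log | awk -F'"' '{print $2}' | awk '{print $3}' | sort | uniq -c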

There were several announcements recently regarding Google’s HTTP/2 support. To read more about it, check out my HTTP/2 experiment here on the Moz Blog.

Via https://developers.google.com/search/docs/guides/r…

Googlebot’s future

Rumor has it that Chrome 59’s headless mode was created for Googlebot, or at least that it was discussed during the design process. It’s hard to say if any of this chatter is true, but if it is, it means that to some extent, Googlebot will “see” the website in the same way as regular Internet users.

This would definitely make everything simpler for developers who wouldn’t have to worry about Googlebot’s ability to crawl even the most complex websites.

Chrome 41 vs. Googlebot’s crawling efficiency

Chrome 41 is a powerful tool for debugging JavaScript crawling and indexing. However, it’s crucial not to jump on the hype train here and start launching websites that “pass the Chrome 41 test.”

Even if Googlebot can “see” our website, there are many other factors that will affect your site’s crawling efficiency. As an example, we already have proof showing that Googlebot can crawl and index JavaScript and many JavaScript frameworks. It doesn’t mean that JavaScript is great for SEO. I gathered significant evidence showing that JavaScript pages aren’t crawled even half as effectively as HTML-based pages.

In summary

Ilya Grigorik’s tweet sheds more light on how Google crawls pages and, thanks to that, we don’t have to build experiments for every feature we’re testing — we can use Chrome 41 for debugging instead. This simple step will definitely save a lot of websites from indexing problems, like when Hulu.com’s JavaScript SEO backfired.

It’s safe to assume that Chrome 41 will now be a part of every SEO’s toolset.


Source: https://moz.com/blog/google-shares-details-googlebot

Does Googlebot Support HTTP/2? Challenging Google’s Indexing Claims – An Experiment

Posted by goralewicz

I was recently challenged with a question from a client, Robert, who runs a small PR firm and needed to optimize a client’s website. His question inspired me to run a small experiment in HTTP protocols. So what was Robert’s question? He asked…

Can Googlebot crawl using HTTP/2 protocols?

You may be asking yourself, why should I care about Robert and his HTTP protocols?

As a refresher, HTTP is the basic set of standards that allows the World Wide Web to exchange information; it’s the reason a web browser can display data stored on another server. The first version dates back to 1989, which means that, like everything else, the early HTTP protocols are showing their age. HTTP/2 is the latest major revision of the protocol, created to replace its aging predecessors.

So, back to our question: why do you, as an SEO, care to know more about HTTP protocols? The short answer is that none of your SEO efforts matter, or can even be carried out, without a basic understanding of HTTP. Robert knew that if his client’s site wasn’t indexing correctly, that client would miss out on valuable web traffic from searches.

The hype around HTTP/2

HTTP/1.1 is a 17-year-old protocol (HTTP 1.0 is 21 years old). Both HTTP 1.0 and 1.1 have limitations, mostly related to performance. When HTTP/1.1 was getting too slow and out of date, Google introduced SPDY in 2009, which was the basis for HTTP/2. Side note: Starting from Chrome 53, Google decided to stop supporting SPDY in favor of HTTP/2.

HTTP/2 was a long-awaited protocol. Its main goal is to improve a website’s performance. It’s currently used by 17% of websites (as of September 2017). Adoption rate is growing rapidly, as only 10% of websites were using HTTP/2 in January 2017. You can see the adoption rate charts here. HTTP/2 is getting more and more popular, and is widely supported by modern browsers (like Chrome or Firefox) and web servers (including Apache, Nginx, and IIS).

Its key advantages are:

  • Multiplexing: The ability to send multiple requests through a single TCP connection.
  • Server push: When a client requires some resource (let’s say, an HTML document), a server can push CSS and JS files to a client cache. It reduces network latency and round-trips.
  • One connection per origin: With HTTP/2, only one connection is needed to load the website.
  • Stream prioritization: Requests (streams) are assigned a priority from 1 to 256 to deliver higher-priority resources faster.
  • Binary framing layer: HTTP/2 is easier to parse (for both the server and the client).
  • Header compression: This feature reduces overhead from plain text in HTTP/1.1 and improves performance.

For more information, I highly recommend reading “Introduction to HTTP/2” by Surma and Ilya Grigorik.
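Before worrying about Googlebot at all, it’s worth checking whether your own server actually negotiates HTTP/2 with a regular client. The one-liner below is only a sketch: it assumes a curl build with HTTP/2 (nghttp2) support, and example.com is a placeholder for your own domain.

# A sketch only: requires curl built with HTTP/2 support; example.com is a placeholder.
# Prints the HTTP version the server actually negotiated (e.g. "2" or "1.1").
curl -sI --http2 -o /dev/null -w '%{http_version}\n' https://example.com/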

All these benefits suggest pushing for HTTP/2 support as soon as possible. However, my experience with technical SEO has taught me to double-check and experiment with solutions that might affect our SEO efforts.

So the question is: Does Googlebot support HTTP/2?

Google’s promises

HTTP/2 represents a promised land, the technical SEO oasis everyone was searching for. By now, many websites have already added HTTP/2 support, and developers don’t want to optimize for HTTP/1.1 anymore. Before I could answer Robert’s question, I needed to know whether or not Googlebot supported HTTP/2-only crawling.

I was not alone in my query. This is a topic which comes up often on Twitter, Google Hangouts, and other such forums. And like Robert, I had clients pressing me for answers. The experiment needed to happen. Below I’ll lay out exactly how we arrived at our answer, but here’s the spoiler: it doesn’t. Google doesn’t crawl using the HTTP/2 protocol. If your website uses HTTP/2, you need to make sure you continue to optimize the HTTP/1.1 version for crawling purposes.

The question

It all started with a Google Hangout in November 2015.

When asked about HTTP/2 support, John Mueller mentioned that HTTP/2-only crawling should be ready by early 2016, and he also mentioned that HTTP/2 would make it easier for Googlebot to crawl pages by bundling requests (images, JS, and CSS could be downloaded with a single bundled request).

“At the moment, Google doesn’t support HTTP/2-only crawling (…) We are working on that, I suspect it will be ready by the end of this year (2015) or early next year (2016) (…) One of the big advantages of HTTP/2 is that you can bundle requests, so if you are looking at a page and it has a bunch of embedded images, CSS, JavaScript files, theoretically you can make one request for all of those files and get everything together. So that would make it a little bit easier to crawl pages while we are rendering them for example.”

Soon after, Twitter user Kai Spriestersbach also asked about HTTP/2 support:

His clients had started dropping HTTP/1.1 connection optimization, just like most developers deploying HTTP/2, which by that time was supported by all major browsers.

After a few quiet months, Google Webmasters reignited the conversation, tweeting that Google won’t hold you back if you’re setting up for HTTP/2. At this time, however, we still had no definitive word on HTTP/2-only crawling. Just because it won’t hold you back doesn’t mean it can handle it — which is why I decided to test the hypothesis.

The experiment

For months as I was following this online debate, I still received questions from our clients who no longer wanted to spend money on HTTP/1.1 optimization. Thus, I decided to create a very simple (and bold) experiment.

I decided to disable HTTP/1.1 on my own website (https://goralewicz.com) and make it HTTP/2 only. I disabled HTTP/1.1 from March 7th until March 13th.

If you’re going to get bad news, at the very least it should come quickly. I didn’t have to wait long to see if my experiment “took.” Very shortly after disabling HTTP/1.1, I couldn’t fetch and render my website in Google Search Console; I was getting an error every time.

My website is fairly small, but I could clearly see that the crawling stats decreased after disabling HTTP/1.1. Google was no longer visiting my site.

While I could have kept going, I stopped the experiment after my website was partially de-indexed due to “Access Denied” errors.

The results

I didn’t need any more information; the proof was right there. Googlebot wasn’t supporting HTTP/2-only crawling. Should you choose to duplicate this at home with your own site, you’ll be happy to know that my site recovered very quickly.

I finally had Robert’s answer, but felt others may benefit from it as well. A few weeks after finishing my experiment, I decided to ask John about HTTP/2 crawling on Twitter and see what he had to say.

(I love that he responds.)

Knowing the results of my experiment, I have to agree with John: disabling HTTP/1 was a bad idea. However, I was seeing other developers discontinuing optimization for HTTP/1, which is why I wanted to test HTTP/2 on its own.

For those looking to run their own experiment, there are two ways of negotiating an HTTP/2 connection:

1. Over HTTP (unsecure) – Make an HTTP/1.1 request that includes an Upgrade header. This seems to be the method to which John Mueller was referring. However, it doesn’t apply to my website (because it’s served via HTTPS). What is more, this is an old-fashioned way of negotiating, not supported by modern browsers. Below is a screenshot from Caniuse.com:

2. Over HTTPS (secure) – Connection is negotiated via the ALPN protocol (HTTP/1.1 is not involved in this process). This method is preferred and widely supported by modern browsers and servers.
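Both negotiation paths are easy to observe with curl. This is only a sketch, assuming a curl build with HTTP/2 support and using example.com as a placeholder host; the grep patterns simply surface the relevant lines from curl’s verbose output.

# 1. Cleartext upgrade (h2c): curl sends a plain HTTP/1.1 request carrying
#    "Upgrade: h2c" and "HTTP2-Settings" headers and waits for the server to switch.
curl -sv --http2 -o /dev/null http://example.com/ 2>&1 | grep -iE 'upgrade|http2-settings'

# 2. ALPN over TLS: the protocol is chosen during the TLS handshake,
#    before any HTTP/1.1 request is sent.
curl -sv --http2 -o /dev/null https://example.com/ 2>&1 | grep -i 'alpn'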

A recent announcement: The saga continues

Googlebot doesn’t make HTTP/2 requests

Fortunately, Ilya Grigorik, a web performance engineer at Google, let everyone peek behind the curtains at how Googlebot is crawling websites and the technology behind it:

If that wasn’t enough, Googlebot doesn’t support the WebSocket protocol. That means your server can’t send resources to Googlebot before they are requested. Supporting it wouldn’t reduce network latency and round-trips; it would simply slow everything down. Modern browsers offer many ways of loading content, including WebRTC, WebSockets, loading local content from drive, etc. However, Googlebot supports only HTTP/FTP, with or without Transport Layer Security (TLS).

Googlebot supports SPDY

During my research and after John Mueller’s feedback, I decided to consult an HTTP/2 expert. I contacted Peter Nikolow of Mobilio and asked him to see if there was anything we could do to find the final answer regarding Googlebot’s HTTP/2 support. Not only did he provide us with help, Peter even created an experiment for us to use. Its results are pretty straightforward: Googlebot does support the SPDY protocol and Next Protocol Negotiation (NPN). And thus, it can’t support HTTP/2.

Below is Peter’s response:


I performed an experiment that shows Googlebot uses SPDY protocol. Because it supports SPDY + NPN, it cannot support HTTP/2. There are many cons to continued support of SPDY:

    1. This protocol is vulnerable
    2. Google Chrome no longer supports SPDY in favor of HTTP/2
    3. Servers have been neglecting to support SPDY. Let’s examine the NGINX example: from version 1.9.5, NGINX no longer supports SPDY.
    4. Apache doesn’t support SPDY out of the box. You need to install mod_spdy, which is provided by Google.

To examine Googlebot and the protocols it uses, I took advantage of s_server, a tool that can debug TLS connections. I used Google Search Console Fetch and Render to send Googlebot to my website.

Here’s a screenshot from this tool showing that Googlebot is using Next Protocol Negotiation (and therefore SPDY):

I’ll briefly explain how you can perform your own test. The first thing you should know is that you can’t use scripting languages (like PHP or Python) for debugging TLS handshakes. The reason for that is simple: these languages see HTTP-level data only. Instead, you should use special tools for debugging TLS handshakes, such as s_server.

Type in the console:

sudo openssl s_server -key key.pem -cert cert.pem -accept 443 -WWW -tlsextdebug -state -msg
sudo openssl s_server -key key.pem -cert cert.pem -accept 443 -www -tlsextdebug -state -msg

Please note the slight (but significant) difference between the “-WWW” and “-www” options in these commands: “-WWW” emulates a simple web server and serves files from the current directory based on the requested path, while “-www” only sends back a status page describing the TLS connection. You can find more about their purpose in the s_server documentation.

Next, invite Googlebot to visit your site by entering the URL in Google Search Console’s Fetch and Render or in Google’s Mobile-Friendly Test.

As I wrote above, there is no logical reason why Googlebot supports SPDY. This protocol is vulnerable; no modern browser supports it. Additionally, servers (including NGINX) are dropping support for it. It’s just a matter of time until Googlebot is able to crawl using HTTP/2. Just implement HTTP/1.1 + HTTP/2 support on your own server (your users will notice the faster loading) and wait until Google is able to send requests using HTTP/2.


Summary

In November 2015, John Mueller said he expected Googlebot to crawl websites by sending HTTP/2 requests starting in early 2016. We don’t know why, as of October 2017, that hasn’t happened yet.

What we do know is that Googlebot doesn’t support HTTP/2. It still crawls by sending HTTP/1.1 requests. Both this experiment and the “Rendering on Google Search” page confirm it. (If you’d like to know more about the technology behind Googlebot, then you should check out what they recently shared.)

For now, it seems we have to accept the status quo. We recommended that Robert (and you readers as well) enable HTTP/2 on your websites for better performance, but continue optimizing for HTTP/1.1. Your visitors will notice and thank you.


Source: https://moz.com/blog/challenging-googlebot-experiment

Facebook acquires anonymous teen compliment app tbh, will let it run

 Facebook wants tbh to be its next Instagram. Today, Facebook announced it’s acquiring positivity-focused polling startup tbh and will allow it to operate somewhat independently as it’s done with Instagram and WhatsApp. tbh had scored 5 million downloads and 4 million daily active users in the past 9 weeks with its app that lets people anonymously answer kind-hearted… Read More
Source: https://techcrunch.com/2017/10/16/facebook-acquires-anonymous-teen-compliment-app-tbh-will-let-it-run/?ncid=rss

Measure Twice, Cut Once: The Reason Why All those Marketing Tactics Keep Failing

Tactics don’t necessarily fail because they’re bad.

They fail because of the context around them.

The customer segment was off. The timing was bad. Or the attempt was half-assed.

It all works. SEO works. Facebook ads work. Conversion optimization works.

But the degree to which they deliver depends wildly on other factors.

And the only way to ensure success is to get those things right, first, before jumping head-first into the tactics.

Here’s how to do the hard work, up-front, to make sure your next campaign goes off without a hitch.

Facebook ads “don’t work”

You can’t browse the interwebs without running into a new shiny hack. A brand new strategy or tactic to implement.

So you ditch the to-do list. You push off the important. You bend to the urgent. (Or at least, that which faintly resembles the urgent.)

You try the new hack. You invest hours that don’t exist and money that you don’t have.

You follow the “Launch Plan” from influencer XYZ to a T. Literally: Every. Single. Thing.

And then?

It falls flat. It works, but not enough. It produces, but not enough.

Seth Godin published Meatball Sundae in 2007. A decade ago.

Strange title, right? There’s a reason behind it:

“People treat the New Marketing like a kid with a twenty-dollar bill at an ice cream parlor. They keep wanting to add more stuff—more candy bits and sprinkles and cream and cherries. The dream is simple: ‘If we can just add enough of [today’s hot topping], everything will take care of itself.’”

Except, as you’re already all too familiar, that’s not how it works in the real world:

“Most of the time, despite all the hype, organizations fail when they try to use this scattershot approach. They fail to get buzz or traffic or noise or sales. Organizations don’t fail because the Web and the New Marketing don’t work. They fail because the Web and the New Marketing work only when applied to the right organization. New Media makes a promise to the consumer. If the organization is unable to keep that promise, then it fails.”

It’s the context, not the tactics.

We aren’t talking about 1960s advertising. We can’t just run ads in a vacuum and shape public opinion.

There are a lot of other things at stake. There are a lot of other aspects to consider.

Facebook advertising is one of the best examples because it’s surprisingly complex and nuanced. You can’t just throw up a one-and-done campaign to see revenue pour in overnight.

That’s why it’s a waste of money according to popular opinion.

68 million people can’t be wrong… can they? (How many people voted last year again?)

Let’s click through a few of these to pull out the real gems:

“Facebook’s stock tanked after the IPO for one singular reason. Their advertising model does not work well. Most people who’ve advertised on Facebook, including myself, have been disappointed.”

Um. Ok.

“In that case, not only has Facebook and other digital technology killed ad creativity, it’s also killed ad effectiveness.”

I’m not even sure what that means.

Ok. Well, please, nobody tell Spearmint Love that Facebook ads don’t work. Because they just posted a 1,100% revenue increase last year using… Facebook ads.

Now, it wasn’t all rainbows, sunshine, and unicorns for them. They ran into problems, too.

It took six months for them to figure out one of the reasons their campaigns were stalling. It was simple and right in front of them the entire time.

Kids grow up.

Which means baby-related ads only work for so long with a particular cohort, before it’s time to refresh, update, and move along.

Again — the underlying issue was the market, the people, the life stage. Not the tactic.

They adapted. They went upstream. They followed customers as they naturally evolved.

So, no. “Boosting” posts endlessly doesn’t work. Buying likes doesn’t work, either. Not by themselves, obviously.

Likes, impressions, and fans don’t pay the bills. Leads and customers do.

That holds true regardless of which advertising medium we’re discussing: TV, radio, billboards, Google, Facebook, or otherwise.

You need a customer acquisition machine on Facebook. Simultaneous campaigns running in parallel. One building the attention and awareness for the next. Another nurturing those and presenting different enticing offers. Only after the foreplay can you get down to business.

Yet, that doesn’t happen. At least, not as often as it should. Which leads to… “It doesn’t work.”

This is far from the only scenario. This same issue pops up over and over again.

It even applies to the proposed Facebook solution you’re putting in place.

Custom audiences aren’t segmented

Facebook might not have the same level of user intent that AdWords does.

However, they do have custom audiences.

These dynamically-generated audiences can help you laser-target campaigns to skyrocket results. (Or, at least, push unprofitable ones past break-even.)

They allow you to run retargeting campaigns on steroids. You can overlay demographic and interest-based data with past user behavior, so you can accurately predict what someone wants next.

Custom audiences help increase your Relevancy Score, which in turn, lowers your Cost Per Click while also increasing your Click-Through Rate.

Awesome, right?

So what could possibly be the problem?

Too often, your custom audiences aren’t custom enough.

Let’s talk about your business. How many products and services do you sell?

Now, how many of those do you sell to different customer segments or personas?

Imagine a simple matrix:

The possibilities might double or triple as you add each new variation. Exponentially.

It’s not my place to tell you that such a business model is too complicated. It is, however, my place to say that you’ve just made your ad campaigns infinitely more difficult.

Because this matrix doesn’t even take into account the funnel stage or intent level each audience has for each product. So we can add another layer of complexity here.

Let’s say you have a custom audience set up for past website visitors to your site. Fine.

However, in that one “custom audience” you’re lumping together all of these personas and products.

In other words, it’s segmented. Barely. A little bit. But not good enough.

The trick is to think through each possible variation and have your customers help you.

For example, the services page from Work the System segments you into two groups right off the bat:

Now, subsequent retargeting campaigns can use the right ad creative. The one that talks about the unique pain points of an online business (like remote workers) vs. that of the brick-and-mortar variety (like local hiring).

See? Everything is (or should be) different.

You can even do this on pricing pages.

For example, Credo names each plan for a different audience:

You segment product features based on personas. So why not your ad campaigns?

Agencies have more fixed expenses than freelancers. Therefore, their project minimums will be higher. Their goals are also in growing and managing a team vs. doing the work themselves.

They’re similar once again. But vastly different when you get down into the weeds.

MarketingExperiments.com worked with a medical company on a similar issue. Simply rewriting collateral pieces for a specific segment (as opposed to a nameless, faceless audience) increased CTR by 49.5%.

Another trick you can try is including different ‘paths’ for each potential problem (and your service that lines up with it.)

So you send out a re-engagement email campaign with links to content pieces for each. Then you see who clicks what.

And then you sync your email data with custom audiences to add these people to the right destination.

Follow any of these recommendations (or better yet, use them together), and you’ll get custom audiences that are, in fact, custom.

It also means you’ll have about 3-4 times the number of custom audiences and campaigns running at any given time.

But it means you’ll have a better shot at success. And at getting Facebook ads to “work.”

All because you put in the proper work ahead of time.

Conversion tracking is off (or non-existent)

People think data is honest.

Unfortunately, it’s not. Data lies more than we care to admit.

Case in point: Conversions.

WTF is a “conversion” these days, anyway?

An email subscriber? A marketing-qualified lead? A sales-qualified lead? A one-off customer? A repeat customer? A high LTV customer?

Sometimes, it’s none of those things.

Years ago, I worked on a new client’s ad account.

The Conversion Rate column inside AdWords showed totals over 100%.

Now, obviously, I know that I’m dashing and brilliant and debonair. But not that much.

Because technically that’s impossible.

So we looked at it for only a few seconds to realize what was happening.

In almost every case, the Conversions total was equal to or more than the Clicks one.

That ain’t good. Here’s why.

Problem #1. It looks like we’re tracking clicks to the landing page as conversions.

Except their goal wasn’t even a form-fill opt-in. It was phone calls.

They anecdotally told me that phone numbers brought in better customers who also converted faster.

Ok, cool. Unfortunately, though, there was another issue.

Problem #2. No call tracking was set up, either.

So the phone rang. Constantly. Several times an hour. And yet PPC got no credit. Despite the fact that PPC probably drove an overwhelming number of the calls (based on the data we saw earlier.)

This client was primarily running classic bottom-of-the-funnel search ads. No display. So the peeps calling were converting. We just had no idea who they were or which ads drove them to call.

This creates a cascading effect of problems.

It meant that there was no historical conversion tracking data to use to draw insights. We literally had no idea which campaigns were converting the best or even which keywords outperformed others.

But wait, because it’s about to get worse.

Problem #3. Aggregate numbers of leads to closed customers were being tracked in Excel.

In other words, X leads from Y campaign turned into customers this month.

Obviously, that’s not ideal. We couldn’t even track PPC leads accurately because of the issues above.

But from there, nobody could see that customer John Smith who converted on Wednesday spent $5,000 and came from Campaign XYZ.

Their “industry specialized CRM software” (read: sh!t) didn’t have an API.

A dude from the “industry-specific CRM” company gave me the following response: “We do not allow for any attempts to manipulate data in the database. Any attempts to do so would cause errors and result in data corruption.”

Which meant that even if we fixed all of these other problems, there was no way for us to pass data back and forth when PPC leads did, in fact, turn into paying customers.

So.

We’re blindly spending dizzying amounts of money. Daily.

And yet, somehow we’re supposed to come in and start driving new customers ASAP?

Without any idea of what’s currently happening, what happened previously, or even what we’re supposed to be optimizing in the first place?

I’ll spare you the boring details. It involved months of going backward to fix various tracking problems (none of which we scoped or billed correctly beforehand #agencylife.)

We basically did everything imaginable.

Except our job.

We designed and created new landing pages so we could track conversions with form fills, and we painstakingly set up call tracking on every single landing page. Then we went so far as to create a process for their internal team to manually reconcile these data points each month and figure out how many customers were finally coming from PPC.

Then after we stopped working together, they undid all of the call tracking work we set up. Because: clients.

</end rant>

The point is, no tactic in the world can make up for this scenario.

Yes, SKAGs are good. Geo-targeting is good. Day-parting is fine, too.

But none of it matters if you can’t address the underlying issues. Otherwise, you’re just flying blind.

Not just a single goal inside Google Analytics. But many. Multiple. At different stages. For different personas. For different products/services.

Which almost never happens.

First, create a good-old Google Analytics goal. You know, create a ‘thank you’ page, redirect opt-in users there, etc.