The ethics of internet culture: a conversation with Taylor Lorenz

Taylor Lorenz was in high demand this week. As a prolific journalist at The Atlantic and an about-to-be member of Harvard's prestigious Nieman Fellowship for journalism, that's perhaps not surprising. Nor was this the first time she's had a bit of a moment: Lorenz has served as an in-house expert on social media and the internet for several major companies, and has written and edited for publications as diverse as The Daily Beast, The Hill, People, The Daily Mail, and Business Insider, all while remaining hip and in touch enough to serve as a kind of youth zeitgeist translator on her current beat as a technology writer for The Atlantic.

Lorenz is in fact publicly busy enough that she's one of only two people I personally know to have openly 'quit email,' the other being my friend Russ, an 82-year-old retired engineer and MIT alum who literally spends all day, most days, working on a plan to reinvent the bicycle.

I wonder if any of Lorenz’s previous professional experiences, however, could have matched the weight of the events she encountered these past several days, when the nightmarish massacre in Christchurch, New Zealand brought together two of her greatest areas of expertise: political extremism (which she covered for The Hill), and internet culture. As her first Atlantic piece after the shootings said, the Christchurch killer’s manifesto was “designed to troll.” Indeed, his entire heinous act was a calculated effort to manipulate our current norms of Internet communication and connection, for fanatical ends.

Lorenz responded with characteristic insight, focusing on the ways in which the stylized insider subcultures the Internet supports can be used to confuse, distract, and mobilize millions of people for good and for truly evil ends:

Before people can even begin to grasp the nuances of today’s internet, they can be radicalized by it. Platforms such as YouTube and Facebook can send users barreling into fringe communities where extremist views are normalized and advanced. Because these communities have so successfully adopted irony as a cloaking device for promoting extremism, outsiders are left confused as to what is a real threat and what’s just trolling. The darker corners of the internet are so fragmented that even when they spawn a mass shooting, as in New Zealand, the shooter’s words can be nearly impossible to parse, even for those who are Extremely Online.

Such insights are among the many reasons I was so grateful to be able to speak with Taylor Lorenz for this week’s installment of my TechCrunch series interrogating the ethics of technology.

As I’ve written in my previous interviews with author and inequality critic Anand Giridharadas, and with award-winning Google exec turned award-winning tech critic James Williams, I come to tech ethics from 25 years of studying religion. My personal approach to religion, however, has essentially always been that it plays a central role in human civilization not only or even primarily because of its theistic beliefs and “faith,” but because of its culture — its traditions, literature, rituals, history, and the content of its communities.

And because I don’t mind comparing technology to religion (not saying they are one and the same, but that there is something to be learned from the comparison), I’d argue that if we really want to understand the ethics of the technologies we are creating, particularly the Internet, we need to explore, as Taylor and I did in our conversation below, “the ethics of internet culture.”

What resulted was, like Lorenz’s work in general, at times whimsical, at times cool enough to fly right over my head, but at all times fascinating and important.

Editor’s Note: we ungated the first of 11 sections of this interview. Reading time: 22 minutes / 5,500 words.

Joking with the Pope

Greg Epstein: Taylor, thanks so much for speaking with me. As you know, I’m writing for TechCrunch about religion, ethics, and technology, and I recently discovered your work when you brought all those together in an unusual way. You subtweeted the Pope, and it went viral.

Taylor Lorenz: I know. [People] were freaking out.

Greg: What was that experience like?

Taylor: The Pope tweeted some insane tweet about how Mary, Jesus’ mother, was the first influencer. He tweeted it out, and everyone was spamming that tweet to me because I write so much about influencers, and I was just laughing. There’s a meme on Instagram about Jesus being the first influencer and how he killed himself or faked his death for more followers.

I just tweeted it out. I think a lot of people didn’t know the joke, the meme, and I think they just thought that it was new & funny. Also [some people] were saying, “how can you joke about Jesus wanting more followers?” I’m like, the Pope literally compared Mary to a social media influencer, so calm down. My whole family is Irish Catholic.

A bunch of people were sharing my tweet. I was like, oh, god. I’m not trying to lead into some religious controversy, but I did think whether my Irish Catholic mother would laugh. She has a really good sense of humor. I thought, I think she would laugh at this joke. I think it’s fine.

Greg: I loved it because it was a real Rorschach test for me. Sitting there looking at that tweet, I was one of the people who didn’t know that particular meme. I’d like to think I love my memes but …

Taylor: I can’t claim credit.

Greg: No, no, but anyway most of the memes I know are the ones my students happen to tell me about. The point is I've spent 15-plus years being a professional atheist. I've had my share of religious debates, but I also have had all these debates with others I'll call Professional Strident Atheists, who are more aggressive in their anti-religion than I am. And I'm thinking, "Okay, this is clearly a tweet that Richard Dawkins would love. Do I love it? I don't know. Wait, I think I do!"

Taylor: I treated it with the greatest respect for all faiths. I thought it was funny to drag the Pope on Twitter.

The influence of Instagram


Source: https://techcrunch.com/2019/03/24/the-ethics-of-internet-culture-a-conversation-with-taylor-lorenz/

Optimizing for Searcher Intent Explained in 7 Visuals

Posted by randfish

Ever get that spooky feeling that Google somehow knows exactly what you mean, even when you put a barely-coherent set of words in the search box? You’re not alone. The search giant has an uncanny ability to un-focus on the keywords in the search query and apply behavioral, content, context, and temporal/historical signals to give you exactly the answer you want.

For marketers and SEOs, this poses a frustrating challenge. Do we still optimize for keywords? The answer is “sort of.” But I think I can show you how to best think about this in a few quick visuals, using a single search query.

First… A short story.

I sent a tweet over the weekend about an old Whiteboard Friday video. Emily Grossman, longtime friend, all-around marketing genius, and official-introducer-of-millennial-speak-to-GenXers-like-me, replied.

[Screenshot: Emily makes fun of Rand's mustache on Twitter]

Ha ha Emily. I already made fun of my own mustache so…

Anywho, I searched Google for “soz.” Not because I didn’t know what it means. I can read between the lines. I’m hip. But, you know, sometimes a Gen-Xer wants to make sure.

The results confirmed my guess, but they also helped illustrate a point of frequent frustration I have when trying to explain modern vs. classic SEO, so I threw together these seven visuals.

There you have it, friends. Classic SEO ranking inputs still matter. They can still help. They’re often the difference between making it to the top 10 vs. having no shot. But too many SEOs get locked into the idea that rankings are made up of a combination of the “Old School Five”:

  1. Keyword use
  2. Links to the page
  3. Domain authority
  4. Anchor text
  5. Freshness

Don’t get me wrong — sometimes, these signals in a powerful enough combination can overwhelm Google’s other inputs. But those examples are getting harder to find.

The three big takeaways for every marketer should be:

  1. Google is working hard to keep searchers on Google. If you help them do that, they’ll often help you rank (whether this is a worthwhile endeavor or a Prisoner’s Dilemma is another matter)
  2. When trying to reverse why something ranks in Google, add the element of “how well does this solve the searcher’s query”
  3. If you’re trying to outrank a competitor, how you align your title, meta description, first few sentences of text, and content around what the searcher truly wants can make the difference… even if you don’t win on links 😉

Related: if you want to see how hard Google’s working to keep searchers on their site vs. clicking results, I’ve got some research on SparkToro showing precisely that.

P.S. I don’t actually believe in arbitrary birth year ranges for segmenting cohorts of people. The differences between two individuals born in 1981 can be vastly wider than between two people born in 1979 and 1985. Boomer vs. Gen X vs. Millennial vs. Gen Z is crappy pseudoscience rooted in our unhealthy desire to categorize and pigeonhole others. Reject that ish.

Sign up for The Moz Top 10, a semimonthly mailer updating you on the top ten hottest pieces of SEO news, tips, and rad links uncovered by the Moz team. Think of it as your exclusive digest of stuff you don’t have time to hunt down but want to read!


Source: https://moz.com/blog/optimizing-for-searcher-intent-explained-in-7-visuals

Facebook staff raised concerns about Cambridge Analytica in September 2015, per court filing

Further details have emerged about when and how much Facebook knew about data-scraping by the disgraced and now defunct Cambridge Analytica political data firm.

Last year a major privacy scandal hit Facebook after it emerged CA had paid GSR, a developer with access to Facebook’s platform, to extract personal data on as many as 87M Facebook users without proper consents.

Cambridge Analytica’s intention was to use the data to build psychographic profiles of American voters to target political messages — with the company initially working for the Ted Cruz campaign and later the Donald Trump presidential campaign.

But employees at Facebook appear to have raised internal concerns about CA scraping user data in September 2015 — i.e. months earlier than Facebook previously told lawmakers it became aware of the GSR/CA breach (December 2015).

The latest twist in the privacy scandal has emerged via a redacted court filing in the U.S. — where the District of Columbia is suing Facebook in a consumer protection enforcement case.

Facebook is seeking to have documents pertaining to the case sealed, while the District argues there is nothing commercially sensitive to require that.

In its opposition to Facebook’s motion to seal the document, the District includes a redacted summary (screengrabbed below) of the “jurisdictional facts” it says are contained in the papers Facebook is seeking to keep secret.

According to the District’s account a Washington D.C.-based Facebook employee warned others in the company about Cambridge Analytica’s data-scraping practices as early as September 2015.

Under questioning in Congress last April, Mark Zuckerberg was asked directly by congressman Mike Doyle when Facebook had first learned about Cambridge Analytica using Facebook data — and whether specifically it had learned about it as a result of the December 2015 Guardian article (which broke the story).

Zuckerberg responded with a “yes” to Doyle’s question.

Facebook repeated the same line to the UK’s Digital, Culture, Media and Sport (DCMS) committee last year, over a series of hearings with less senior staffers.

Damian Collins, the chair of the DCMS committee — which made repeat requests for Zuckerberg himself to testify in front of its enquiry into online disinformation, only to be repeatedly rebuffed — tweeted yesterday that the new detail could suggest Facebook “consistently mislead” the British parliament.

The DCMS committee has previously accused Facebook of deliberately misleading its enquiry on other aspects of the CA saga, with Collins taking the company to task for displaying a pattern of evasive behavior.

The earlier charge that it misled the committee refers to a hearing in Washington in February 2018 — when Facebook sent its UK head of policy, Simon Milner, and its head of global policy management, Monika Bickert, to field DCMS’ questions — where the pair failed to inform the committee about a legal agreement Facebook had made with Cambridge Analytica in December 2015.

The committee’s final report was also damning of Facebook, calling for regulators to instigate antitrust and privacy probes of the tech giant.

Meanwhile, questions have continued to be raised about Facebook’s decision to hire GSR co-founder Joseph Chancellor, who reportedly joined the company around November 2015.

The question now is if Facebook knew there were concerns about CA data-scraping prior to hiring the co-founder of the company that sold scraped Facebook user data to CA, why did it go ahead and hire Chancellor?

The GSR co-founder has never been made available by Facebook to answer questions from politicians (or press) on either side of the pond.

Last fall he was reported to have quietly left Facebook, with no comment from Facebook on the reasons behind his departure — just as it had never explained why it hired him in the first place.

But the new timeline that’s emerged of what Facebook knew when makes those questions more pressing than ever.

Reached for a response to the details contained in the District of Columbia’s court filing, a Facebook spokeswoman sent us this statement:

Facebook was not aware of the transfer of data from Kogan/GSR to Cambridge Analytica until December 2015, as we have testified under oath

In September 2015 employees heard speculation that Cambridge Analytica was scraping data, something that is unfortunately common for any internet service. In December 2015, we first learned through media reports that Kogan sold data to Cambridge Analytica, and we took action. Those were two different things.

Facebook did not engage with questions about any of the details and allegations in the court filing.

A little later in the court filing, the District of Columbia writes that the documents Facebook is seeking to seal are “consistent” with its allegations that “Facebook has employees embedded within multiple presidential candidate campaigns who… knew, or should have known… [that] Cambridge Analytica [was] using the Facebook consumer data harvested by [GSR’s] [Aleksandr] Kogan throughout the 2016 [United States presidential] election.”

It goes on to suggest that Facebook’s concern to seal the document is “reputational”, suggesting — in another redacted segment (below) — that it might “reflect poorly” on Facebook that a DC-based employee had flagged Cambridge Analytica months prior to news reports of its improper access to user data.

“The company may also seek to avoid publishing its employees’ candid assessments of how multiple third-parties violated Facebook’s policies,” it adds, chiming with arguments made last year by GSR’s Kogan who suggested the company failed to enforce the terms of its developer policy, telling the DCMS committee it therefore didn’t have a “valid” policy.

As we’ve reported previously, the UK’s data protection watchdog — which has an ongoing investigation into CA’s use of Facebook data — was passed information by Facebook as part of that probe which showed that three “senior managers” had been involved in email exchanges, prior to December 2015, concerning the CA breach.

It’s not clear whether these exchanges are the same correspondence the District of Columbia has obtained and which Facebook is seeking to seal. Or whether there were multiple email threads raising concerns about the company.

The ICO passed the correspondence it obtained from Facebook to the DCMS committee — which last month said it had agreed at the request of the watchdog to keep the names of the managers confidential. (The ICO also declined to disclose the names or the correspondence when we made a Freedom of Information request last month — citing rules against disclosing personal data and its ongoing investigation into CA meaning the risk of release might be prejudicial to its investigation.)

In its final report the committee said this internal correspondence indicated “profound failure of governance within Facebook” — writing:

[I]t would seem that this important information was not shared with the most senior executives at Facebook, leading us to ask why this was the case. The scale and importance of the GSR/Cambridge Analytica breach was such that its occurrence should have been referred to Mark Zuckerberg as its CEO immediately. The fact that it was not is evidence that Facebook did not treat the breach with the seriousness it merited. It was a profound failure of governance within Facebook that its CEO did not know what was going on, the company now maintains, until the issue became public to us all in 2018. The incident displays the fundamental weakness of Facebook in managing its responsibilities to the people whose data is used for its own commercial interests.

We reached out to the ICO for comment on the information to emerge via the Columbia suit, and also to the Irish Data Protection Commission, the lead DPA for Facebook’s international business, which currently has 15 open investigations into Facebook or Facebook-owned businesses related to various security, privacy and data protection issues.

Last year the ICO issued Facebook with the maximum possible fine under UK law for the CA data breach.

Shortly after, Facebook announced it would appeal, saying the watchdog had not found evidence that any UK users’ data was misused by CA.

A date for the hearing of the appeal set for earlier this week was canceled without explanation. A spokeswoman for the tribunal court told us a new date would appear on its website in due course.


Source: https://techcrunch.com/2019/03/22/facebook-staff-raised-concerns-about-cambridge-analytica-in-september-2015-per-court-filing/

The One-Hour Guide to SEO, Part 2: Keyword Research – Whiteboard Friday

Posted by randfish

Before doing any SEO work, it’s important to get a handle on your keyword research. Aside from helping to inform your strategy and structure your content, you’ll get to know the needs of your searchers, the search demand landscape of the SERPs, and what kind of competition you’re up against.

In the second part of the One-Hour Guide to SEO, the inimitable Rand Fishkin covers what you need to know about the keyword research process, from understanding its goals to building your own keyword universe map. Enjoy!


Video Transcription

Howdy, Moz fans. Welcome to another portion of our special edition of Whiteboard Friday, the One-Hour Guide to SEO. This is Part II – Keyword Research. Hopefully you’ve already seen our SEO strategy session from last week. What we want to do in keyword research is talk about why keyword research is required. Why do I have to do this task prior to doing any SEO work?

The answer is fairly simple. If you don’t know which words and phrases people type into Google or YouTube or Amazon or Bing, whatever search engine you’re optimizing for, you’re not going to be able to know how to structure your content. You won’t be able to get into the searcher’s brain, into their head, to imagine and empathize with what they actually want from your content. You probably won’t do correct targeting, which will mean your competitors, who are doing keyword research, are choosing wise search phrases, wise words and terms and phrases that searchers are actually looking for, while you might unfortunately be optimizing for words and phrases that no one is actually looking for, that not as many people are looking for, or that are much more difficult than what you can actually rank for.

The goals of keyword research

So let’s talk about some of the big-picture goals of keyword research. 

Understand the search demand landscape so you can craft more optimal SEO strategies

First off, we are trying to understand the search demand landscape so we can craft better SEO strategies. Let me just paint a picture for you.

I was helping a startup here in Seattle, Washington, a number of years ago — this was probably a couple of years ago — called Crowd Cow. Crowd Cow is an awesome company. They basically will deliver beef from small ranchers and small farms straight to your doorstep. I personally am a big fan of steak, and I don’t really love the quality of the stuff that I can get from the store. I don’t love the mass-produced sort of industry around beef. I think there are a lot of Americans who feel that way. So working with small ranchers directly, where they’re sending it straight from their farms, is kind of an awesome thing.

But when we looked at the SEO picture for Crowd Cow, for this company, what we saw was that there was more search demand for competitors of theirs, people like Omaha Steaks, which you might have heard of. There was more search demand for them than there was for “buy steak online,” “buy beef online,” and “buy rib eye online.” Even things like just “shop for steak” or “steak online,” these broad keyword phrases, the branded terms of their competition had more search demand than all of the specific keywords, the unbranded generic keywords put together.

That is a very different picture from a world like “soccer jerseys,” where I spent a little bit of keyword research time today looking, and basically the brand names in that field do not have nearly as much search volume as the generic terms for soccer jerseys and custom soccer jerseys and football clubs’ particular jerseys. Those generic terms have much more volume, which is a totally different kind of SEO that you’re doing. One is very, “Oh, we need to build our brand. We need to go out into this marketplace and create demand.” The other one is, “Hey, we need to serve existing demand already.”

So you’ve got to understand your search demand landscape so that you can present to your executive team and your marketing team or your client or whoever it is, hey, this is what the search demand landscape looks like, and here’s what we can actually do for you. Here’s how much demand there is. Here’s what we can serve today versus we need to grow our brand.

Create a list of terms and phrases that match your marketing goals and are achievable in rankings

The next goal of keyword research, we want to create a list of terms and phrases that we can then use to match our marketing goals and achieve rankings. We want to make sure that the rankings that we promise, the keywords that we say we’re going to try and rank for actually have real demand and we can actually optimize for them and potentially rank for them. Or in the case where that’s not true, they’re too difficult or they’re too hard to rank for. Or organic results don’t really show up in those types of searches, and we should go after paid or maps or images or videos or some other type of search result.

Prioritize keyword investments so you do the most important, high-ROI work first

We also want to prioritize those keyword investments so we’re doing the most important work, the highest ROI work in our SEO universe first. There’s no point spending hours and months going after a bunch of keywords that if we had just chosen these other ones, we could have achieved much better results in a shorter period of time.

Match keywords to pages on your site to find the gaps

Finally, we want to take all the keywords that matter to us and match them to the pages on our site. If we don’t have matches, we need to create that content. If we do have matches but they are suboptimal, not doing a great job of answering that searcher’s query, well, we need to do that work as well. If we have a page that matches but we haven’t done our keyword optimization, which we’ll talk a little bit more about in a future video, we’ve got to do that too.

Understand the different varieties of search results

So an important part of understanding how search engines work — we’re going to start down here and then we’ll come back up — is to have this understanding that when you perform a query on a mobile device or a desktop device, Google shows you a vast variety of results. Ten or fifteen years ago this was not the case. We searched 15 years ago for “soccer jerseys,” what did we get? Ten blue links. I think, unfortunately, in the minds of many search marketers and many people who are unfamiliar with SEO, they still think of it that way. How do I rank number one? The answer is, well, there are a lot of things “number one” can mean today, and we need to be careful about what we’re optimizing for.

So if I search for “soccer jersey,” I get these shopping results from Macy’s and soccer.com and all these other places. Google sort of has this sliding box of sponsored shopping results. Then they’ve got advertisements below that, notated with this tiny green ad box. Then below that, there are a couple of organic results, what we would call classic SEO, 10 blue links-style organic results. There are two of those. Then there’s a box of maps results that show me local soccer stores in my region, which is a totally different kind of optimization, local SEO. So you need to make sure that you understand and that you can convey that understanding to everyone on your team that these different kinds of results mean different types of SEO.

Now I’ve done some work recently over the last few years with a company called Jumpshot. They collect clickstream data from millions of browsers around the world and millions of browsers here in the United States. So they are able to provide some broad overview numbers collectively across the billions of searches that are performed on Google every day in the United States.

Click-through rates differ between mobile and desktop

The click-through rates look something like this. For mobile devices, on average, paid results get 8.7% of all clicks, organic results get about 40%, a little under 40% of all clicks, and zero-click searches, where a searcher performs a query but doesn’t click anything, either because Google essentially answers it right in the results or because the searcher is so unhappy with the potential results that they don’t bother clicking anything, that is 62%. So the vast majority of searches on mobile are no-click searches.

On desktop, it’s a very different story. It’s sort of inverted. So paid is 5.6%. I think people are a little savvier about which result they should be clicking on desktop. Organic is 65%, so much, much higher than mobile. Zero-click searches is 34%, so considerably lower.

There are a lot more clicks happening on a desktop device. That being said, right now we think it’s around 60–40, meaning 60% of queries on Google, at least, happen on mobile and 40% happen on desktop, somewhere in those ranges. It might be a little higher or a little lower.

The search demand curve

Another important and critical thing to understand about the keyword research universe and how we do keyword research is that there’s a sort of search demand curve. So for any given universe of keywords, there is essentially a small number, maybe a few to a few dozen keywords that have millions or hundreds of thousands of searches every month. Something like “soccer” or “Seattle Sounders,” those have tens or hundreds of thousands, even millions of searches every month in the United States.

But people searching for “Sounders FC away jersey customizable,” there are very, very few searches per month, but there are millions, even billions of keywords like this. 

The long-tail: millions of keyword terms and phrases, low number of monthly searches

When Sundar Pichai, Google’s current CEO, was testifying before Congress just a few months ago, he told Congress that around 20% of all searches that Google receives each day they have never seen before. No one has ever performed them in the history of the search engines. I think maybe that number is closer to 18%. But that is just a remarkable sum, and it tells you about what we call the long tail of search demand, essentially tons and tons of keywords, millions or billions of keywords that are only searched for 1 time per month, 5 times per month, 10 times per month.

The chunky middle: thousands or tens of thousands of keywords with ~50–100 searches per month

If you want to get into this next layer, what we call the chunky middle in the SEO world, this is where there are thousands or tens of thousands of keywords potentially in your universe, but they only have between say 50 and a few hundred searches per month.

The fat head: a very few keywords with hundreds of thousands or millions of searches

Then this fat head has only a few keywords. There’s only one keyword like “soccer” or “soccer jersey,” which is actually probably more like the chunky middle, but it has hundreds of thousands or millions of searches. The fat head is higher competition and broader intent.
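To make those three buckets a little more concrete, here's a tiny illustrative sketch in Python; the volume thresholds and the example keywords are made-up assumptions for demonstration, not Moz definitions or real search data.

```python
# Illustrative only: bucket keywords by monthly search volume into the
# fat head, chunky middle, and long tail. Thresholds are arbitrary assumptions.
def demand_bucket(monthly_searches: int) -> str:
    if monthly_searches >= 100_000:
        return "fat head"       # a handful of huge, broad-intent terms
    if monthly_searches >= 50:
        return "chunky middle"  # thousands of mid-volume terms
    return "long tail"          # millions of rare, specific-intent terms

# Hypothetical volumes, not real data
example_keywords = {
    "soccer": 1_200_000,
    "soccer jersey": 450_000,
    "Sounders FC jersey": 900,
    "Sounders FC away jersey customizable": 5,
}

for keyword, volume in example_keywords.items():
    print(f"{keyword!r}: {volume:,} searches/month -> {demand_bucket(volume)}")
```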

Searcher intent and keyword competition

What do I mean by broader intent? That means when someone performs a search for “soccer,” you don’t know what they’re looking for. The likelihood that they want a customizable soccer jersey right that moment is very, very small. They’re probably looking for something much broader, and it’s hard to know exactly their intent.

However, as you drift down into the chunky middle and into the long tail, where there are more keywords but fewer searches for each keyword, your competition gets much lower. There are fewer people trying to compete and rank for those, because they don’t know to optimize for them, and there’s more specific intent. “Customizable Sounders FC away jersey” is very clear. I know exactly what I want. I want to order a customizable jersey from the Seattle Sounders away, the particular colors that the away jersey has, and I want to be able to put my logo on there or my name on the back of it, what have you. So super specific intent.

Build a map of your own keyword universe

As a result, you need to figure out what the map of your universe looks like so that you can present that, and you need to be able to build a list that looks something like this; it's what you should have at the end of the keyword research process. We featured a screenshot from Moz's Keyword Explorer, which is a tool that I really like to use and find super helpful whenever I'm helping companies. Even now that I have left Moz and been gone for a year, I still use Keyword Explorer because the volume data is so good and it puts all the stuff together. However, there are two or three other tools that a lot of people like: one from Ahrefs, which I think also has the name Keyword Explorer, and one from SEMrush, which I like, although some of the volume numbers, at least in the United States, are not as good as what I might hope for. There are a number of other tools that you could check out as well. A lot of people like Google Trends, which is totally free and interesting for some of that broad volume data.



So I might have terms like “soccer jersey,” “Sounders FC jersey”, and “custom soccer jersey Seattle Sounders.” Then I’ll have these columns: 

  • Volume, because I want to know how many people search for it; 
  • Difficulty, how hard will it be to rank. If it’s super difficult to rank and I have a brand-new website and I don’t have a lot of authority, well, maybe I should target some of these other ones first that are lower difficulty. 
  • Organic Click-through Rate, just like we talked about back here. There are different levels of click-through rate, and the tools, at least Moz's Keyword Explorer tool, use Jumpshot data on a per-keyword basis to estimate what percent of people are going to click the organic results. Should you optimize for it? Well, if the click-through rate is only 60%, pretend that instead of 100 searches, this keyword only has 60 available searches for your organic clicks. Ninety-five percent, though? Great, awesome. Nearly all of those monthly searches are available to you. (There's a quick sketch of this math just after this list.)
  • Business Value, how useful is this to your business? 
  • Then set some type of priority to determine which keywords to go after first. So I might look at this list and say, "Hey, for my new soccer jersey website, this is the most important keyword. I want to go after 'custom soccer jersey' for each team in the U.S., then I'll go after team jerseys, and then I'll go after 'customizable away jerseys.' Maybe last of all I'll go after 'soccer jerseys,' because it's just so competitive and so difficult to rank for. There's a lot of volume, but the search intent is not as great, the business value to me is not as good, all those kinds of things."
  • Last, but not least, I want to know the types of searches that appear — organic, paid. Do images show up? Does shopping show up? Does video show up? Do maps results show up? If those other types of search results, like we talked about here, show up in there, I can do SEO to appear in those places too. That could yield, in certain keyword universes, a strategy that is very image centric or very video centric, which means I’ve got to do a lot of work on YouTube, or very map centric, which means I’ve got to do a lot of local SEO, or other kinds like this.
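As promised in the click-through rate bullet above, here is a minimal sketch of that available-clicks arithmetic. The keywords, volumes, and CTR percentages below are hypothetical placeholders standing in for the per-keyword estimates a research tool would give you.

```python
# Rough sketch: "available organic clicks" = monthly volume x estimated organic CTR.
# Every number here is hypothetical, for illustration only.
keyword_rows = [
    # (keyword, monthly search volume, estimated organic click-through rate)
    ("soccer jersey", 450_000, 0.60),                    # ads and shopping boxes soak up clicks
    ("Sounders FC jersey", 900, 0.95),                   # mostly classic organic results
    ("custom soccer jersey Seattle Sounders", 40, 0.95),
]

for keyword, volume, organic_ctr in keyword_rows:
    available = volume * organic_ctr
    print(f"{keyword}: ~{available:,.0f} organic clicks available per month")
```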

Once you build a keyword research list like this, you can begin the prioritization process and the true work of creating pages, mapping the pages you already have to the keywords that you’ve got, and optimizing in order to rank. We’ll talk about that in Part III next week. Take care.

Video transcription by Speechpad.com

Sign up for The Moz Top 10, a semimonthly mailer updating you on the top ten hottest pieces of SEO news, tips, and rad links uncovered by the Moz team. Think of it as your exclusive digest of stuff you don’t have time to hunt down but want to read!


Source: https://moz.com/blog/one-hour-seo-guide-part-2-keyword-research

Facebook’s AI couldn’t spot mass murder

Facebook has given another update on measures it took and what more it’s doing in the wake of the livestreamed video of a gun massacre by a far right terrorist who killed 50 people in two mosques in Christchurch, New Zealand.

Earlier this week the company said the video of the slayings had been viewed fewer than 200 times during the livestream broadcast itself, and about 4,000 times before it was removed from Facebook — with the stream not reported to Facebook until 12 minutes after it had ended.

None of the users who watched the killings unfold on its platform in real time apparently reported the stream, according to the company.

It also previously said it removed 1.5 million versions of the video from its site in the first 24 hours after the livestream, with 1.2M of those caught at the point of upload — meaning it failed to stop 300,000 uploads at that point. Though as we pointed out in our earlier report those stats are cherrypicked — and only represent the videos Facebook identified. We found other versions of the video still circulating on its platform 12 hours later.

In the wake of the livestreamed terror attack, Facebook has continued to face calls from world leaders to do more to make sure such content cannot be distributed by its platform.

The prime minister of New Zealand, Jacinda Ardern, told media yesterday that the video “should not be distributed, available, able to be viewed”, dubbing it: “Horrendous.”

She confirmed Facebook had been in contact with her government but emphasized that in her view the company has not done enough.

She also later told the New Zealand parliament: “We cannot simply sit back and accept that these platforms just exist and that what is said on them is not the responsibility of the place where they are published. They are the publisher. Not just the postman.”

We asked Facebook for a response to Ardern’s call for online content platforms to accept publisher-level responsibility for the content they distribute. Its spokesman avoided the question — pointing instead to its latest piece of crisis PR which it titles: “A Further Update on New Zealand Terrorist Attack”.

Here it writes that “people are looking to understand how online platforms such as Facebook were used to circulate horrific videos of the terrorist attack”, saying it therefore “wanted to provide additional information from our review into how our products were used and how we can improve going forward”, before going on to reiterate many of the details it has previously put out.

Including that the massacre video was quickly shared to the 8chan message board by a user posting a link to a copy of the video on a file-sharing site. This was prior to Facebook itself being alerted to the video being broadcast on its platform.

It goes on to imply 8chan was a hub for broader sharing of the video — claiming that: “Forensic identifiers on many of the videos later circulated, such as a bookmarks toolbar visible in a screen recording, match the content posted to 8chan.”

So it’s clearly trying to make sure it’s not singled out by political leaders seeking policy responses to the challenge posed by online hate and terrorist content.

Among the details it chooses to dwell on in the update is how the AIs it uses to aid the human content review process of flagged Facebook Live streams are in fact tuned to “detect and prioritize videos that are likely to contain suicidal or harmful acts” — with the AI pushing such videos to the top of human moderators’ content heaps, above all the other stuff they also need to look at.

Clearly “harmful acts” were involved in the New Zealand terrorist attack. Yet Facebook’s AI was unable to detect a massacre unfolding in real time. A mass killing involving an automatic weapon slipped right under the robot’s radar.

Facebook explains this by saying it’s because it does not have the training data to create an algorithm that understands it’s looking at mass murder unfolding in real time.

It also implies the task of training an AI to catch such a horrific scenario is exacerbated by the proliferation of videos of first person shooter videogames on online content platforms.

It writes: “[T]his particular video did not trigger our automatic detection systems. To achieve that we will need to provide our systems with large volumes of data of this specific kind of content, something which is difficult as these events are thankfully rare. Another challenge is to automatically discern this content from visually similar, innocuous content – for example if thousands of videos from live-streamed video games are flagged by our systems, our reviewers could miss the important real-world videos where we could alert first responders to get help on the ground.”

The videogame element is a chilling detail to consider.

It suggests that a harmful real-life act that mimics a violent video game might just blend into the background, as far as AI moderation systems are concerned; invisible in a sea of innocuous, virtually violent content churned out by gamers. (Which in turn makes you wonder whether the Internet-steeped killer in Christchurch knew — or suspected — that filming the attack from a videogame-esque first person shooter perspective might offer a workaround to dupe Facebook’s imperfect AI watchdogs.)

Facebook’s post is doubly emphatic that AI is “not perfect” and is “never going to be perfect”.

“People will continue to be part of the equation, whether it’s the people on our team who review content, or people who use our services and report content to us,” it writes, reiterating yet again that it has ~30,000 people working in “safety and security”, about half of whom are doing the sweating hideous toil of content review.

This is, as we’ve said many times before, a fantastically tiny number of human moderators given the vast scale of content continually uploaded to Facebook’s 2.2BN+ user platform.

Moderating Facebook remains a hopeless task because so few humans are doing it.

Moreover, AI can’t really help. (Later in the blog post Facebook also writes vaguely that there are “millions” of livestreams broadcast on its platform every day, saying that’s why adding a short broadcast delay — such as TV stations do — wouldn’t help at all in catching inappropriate real-time content.)

At the same time, Facebook’s update makes it clear how much its ‘safety and security’ systems rely on unpaid humans too: aka Facebook users taking the time and mind to report harmful content.

Some might say that’s an excellent argument for a social media tax.

The fact Facebook did not get a single report of the Christchurch massacre livestream while the terrorist attack unfolded meant the content was not prioritized for “accelerated review” by its systems, which it explains prioritize reports attached to videos that are still being streamed — because “if there is real-world harm we have a better chance to alert first responders and try to get help on the ground”.

Though it also says it expanded its acceleration logic last year to “also cover videos that were very recently live, in the past few hours”.

But again it did so with a focus on suicide prevention — meaning the Christchurch video would only have been flagged for acceleration review in the hours after the stream ended if it had been reported as suicide content.

So the ‘problem’ is that Facebook’s systems don’t prioritize mass murder.

“In [the first] report, and a number of subsequent reports, the video was reported for reasons other than suicide and as such it was handled according to different procedures,” it writes, adding it’s “learning from this” and “re-examining our reporting logic and experiences for both live and recently live videos in order to expand the categories that would get to accelerated review”.

No shit.

Facebook also discusses its failure to stop versions of the massacre video from resurfacing on its platform, having been — as it tells it — “so effective” at preventing the spread of propaganda from terrorist organizations like ISIS with the use of image and video matching tech.

It claims its tech was outfoxed in this case by “bad actors” creating many different edited versions of the video to try to thwart filters, as well as by the various ways “a broader set of people distributed the video and unintentionally made it harder to match copies”.

So, essentially, the ‘virality’ of the awful event created too many versions of the video for Facebook’s matching tech to cope.

“Some people may have seen the video on a computer or TV, filmed that with a phone and sent it to a friend. Still others may have watched the video on their computer, recorded their screen and passed that on. Websites and pages, eager to get attention from people seeking out the video, re-cut and re-recorded the video into various formats,” it writes, in what reads like another attempt to spread blame for the amplification role that its 2.2BN+ user platform plays.

In all Facebook says it found and blocked more than 800 visually-distinct variants of the video that were circulating on its platform.

It reveals it resorted to using audio matching technology to try to detect videos that had been visually altered but had the same soundtrack. And again claims it’s trying to learn and come up with better techniques for blocking content that’s being re-shared widely by individuals as well as being rebroadcast by mainstream media. So any kind of major news event, basically.

In a section on next steps, Facebook says improving its matching technology to prevent inappropriate viral videos from spreading is its priority.

But audio matching clearly won’t help if malicious re-sharers simply re-edit the visuals and switch the soundtrack too in future.

It also concedes it needs to be able to react faster “to this kind of content on a live streamed video” — though it has no firm fixes to offer there either, saying only that it will explore “whether and how AI can be used for these cases, and how to get to user reports faster”.

Another priority it claims among its “next steps” is fighting “hate speech of all kinds on our platform”, saying this includes more than 200 white supremacist organizations globally “whose content we are removing through proactive detection technology”.

It’s glossing over plenty of criticism on that front too though — including research that suggests banned far right hate preachers are easily able to evade detection on its platform. Plus its own foot-dragging on shutting down far right extremists. (Facebook only finally banned one infamous UK far right activist last month, for example.)

In its last PR sop, Facebook says it’s committed to expanding its industry collaboration to tackle hate speech via the Global Internet Forum to Counter Terrorism (GIFCT), which formed in 2017 as platforms were being squeezed by politicians to scrub ISIS content — in a collective attempt to stave off tighter regulation.

“We are experimenting with sharing URLs systematically rather than just content hashes, are working to address the range of terrorists and violent extremists operating online, and intend to refine and improve our ability to collaborate in a crisis,” Facebook writes now, offering more vague experiments as politicians call for content responsibility.


Source: https://techcrunch.com/2019/03/21/facebooks-ai-couldnt-spot-mass-murder/

Facebook admits it stored ‘hundreds of millions’ of account passwords in plaintext

Flip the “days since last Facebook security incident” back to zero.

Facebook confirmed Thursday in a blog post, prompted by a report by cybersecurity reporter Brian Krebs, that it stored “hundreds of millions” of account passwords in plaintext for years.

The discovery was made in January, said Facebook’s Pedro Canahuati, as part of a routine security review. None of the passwords were visible to anyone outside Facebook, he said. Facebook admitted the security lapse months later, after Krebs said logs were accessible to some 2,000 engineers and developers.

Krebs said the bug dated back to 2012.

“This caught our attention because our login systems are designed to mask passwords using techniques that make them unreadable,” said Canahuati. He added that the company has “found no evidence to date that anyone internally abused or improperly accessed them,” but did not say how it reached that conclusion.

Facebook said it will notify “hundreds of millions of Facebook Lite users” (Facebook Lite is a lighter version of the app for regions where internet speeds are slow and bandwidth is expensive) and “tens of millions of other Facebook users.” The company also said “tens of thousands of Instagram users” will be notified of the exposure.

Krebs said as many as 600 million users could be affected — about one-fifth of the company’s 2.7 billion users, but Facebook has yet to confirm the figure.

Facebook also didn’t say how the bug came to be. Storing passwords in readable plaintext is insecure. Companies like Facebook are supposed to hash and salt passwords — two ways of further scrambling them — so they can be stored securely. That allows companies to verify a user’s password without knowing what it is.
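For readers wondering what "hash and salt" actually looks like, here is a minimal sketch using Python's standard library. It illustrates the general technique described above, not Facebook's actual implementation, and the iteration count is just a placeholder.

```python
import hashlib
import hmac
import os

def hash_password(password: str) -> tuple[bytes, bytes]:
    """Return (salt, digest); only these are stored, never the plaintext password."""
    salt = os.urandom(16)  # random salt: identical passwords produce different digests
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
    return salt, digest

def verify_password(password: str, salt: bytes, stored_digest: bytes) -> bool:
    """Re-hash the login attempt with the stored salt and compare in constant time."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
    return hmac.compare_digest(candidate, stored_digest)

salt, digest = hash_password("correct horse battery staple")
print(verify_password("correct horse battery staple", salt, digest))  # True
print(verify_password("wrong guess", salt, digest))                   # False
```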

Twitter and GitHub were hit by similar but independent bugs last year. Both companies said passwords were stored in plaintext and not scrambled.

It’s the latest in a string of embarrassing security issues at the company, prompting congressional inquiries and government investigations. It was reported last week that Facebook’s deals that allowed other tech companies to access account data without consent were under criminal investigation.

It’s not known why Facebook took months to confirm the incident, or if the company informed state or international regulators per U.S. breach notification and European data protection laws. We asked Facebook but a spokesperson did not immediately comment beyond the blog post.

We’ve contacted the Irish data protection office, which covers Facebook’s European operations, but have not heard back.


Source: https://techcrunch.com/2019/03/21/facebook-plaintext-passwords/

PicsArt hits 130 million MAUs as Chinese flock to its photo-editing app

If you’re like me, who isn’t big on social media, you’d think that the image filters that come inside most apps will do the job. But for many others, especially the younger crowd, making their photos stand out is a huge deal.

The demand is big enough that PicsArt, a rival to filtering companies VSCO and Snapseed, recently hit 130 million monthly active users worldwide, roughly a year after it amassed 100 million MAUs. Like VSCO, PicsArt now offers video overlays, though images are still its focus.

Nearly 80 percent of PicsArt’s users are under the age of 35, and those younger than 18 are driving most of its growth. The “Gen Z” (the generation after millennials) users aren’t obsessed with the next big, big thing. Rather, they pride themselves on having niche interests, be it K-pop, celebrities, anime, sci-fi or space science, topics that come in the form of filters, effects, stickers and GIFs in PicsArt’s content library.

“PicsArt is helping to drive a trend I call visual storytelling. There’s a generation of young people who communicate through memes, short-form videos, images and stickers, and they rarely use words,” Tammy Nam, who joined PicsArt as its chief operating officer in July, told TechCrunch in an interview.

PicsArt has so far raised $45 million, according to data collected by Crunchbase. It picked up $20 million from a Series B round in 2016 to grow its Asia focus and told TechCrunch that it’s “actively considering fundraising to fuel [its] rapid growth even more.”

PicsArt wants to help users stand out on social media, for instance, by virtually applying this rainbow makeup look on them. Image: PicsArt via Weibo

The app doubles as a social platform, although the use case is much smaller compared to the size of Instagram, Facebook and other mainstream social media products. About 40 percent of PicsArt’s users post on the app, putting it in a unique position where it competes with the social media juggernauts on one hand, and serves as a platform-agnostic app to facilitate content creation for its rivals on the other.

What separates PicsArt from the giants, according to Nam, is that people who do share there tend to be content creators rather than passive consumers.

“On TikTok and Instagram, the majority of the people there are consumers. Almost 100 percent of the people on PicsArt are creating or editing something. For many users, coming on PicsArt is a built-in habit. They come in every week, and find the editing process Zen-like and peaceful.”

Trending in China

Most of PicsArt’s users live in the United States, but the app owes much of its recent success to China, its fastest growing market with more than 15 million MAUs. The regional growth, which has been 10-30 percent month-over-month recently, appears more remarkable when factoring in PicsArt’s zero user acquisition expense in a crowded market where pay-to-play is a norm for emerging startups.

“Many larger companies [in China] are spending a lot of money on advertising to gain market share. PicsArt has done zero paid marketing in China,” noted Nam.

Screenshot: TikTok-related stickers from PicsArt’s library

When people catch sight of an impressive image filtering effect online, many will inquire about the toolset behind it. Chinese users find out about the Armenian startup from photos and videos hashtagged #PicsArt, not different from how VSCO gets discovered from #vscocam on Instagram. It’s through such word of mouth that PicsArt broke into China, where users flocked to its Avengers-inspired disappearing superhero effect last May when the film was screening. China is now the company’s second largest market by revenue after the U.S.

Screenshot: PicsArt lets users easily apply the Avengers dispersion effect to their own photos

A hurdle that all media apps see in China is the country’s opaque guidelines on digital content. Companies in the business of disseminating information, from WeChat to TikTok, hire armies of content moderators to root out what the government deems inappropriate or illegal. PicsArt says it uses artificial intelligence to sterilize content and keeps a global moderator team that also keeps an eye on its China content.

Despite being headquartered in Silicon Valley, PicsArt has placed its research and development center in Armenia, home to founder Hovhannes Avoyan. This gives the startup access to much cheaper engineering talent in the country and neighboring Russia compared to what it can hire in the U.S. To date, 70 percent of the company’s 360 employees are working in engineering and product development (50 percent of whom are female), an investment it believes helps keep its creative tools up to date.

Most of PicsArt’s features are free to use, but the firm has also looked into getting paid. It rolled out a premium program last March that gives users more sophisticated functions and exclusive content. This segment has already leapfrogged advertising to be PicsArt’s largest revenue source, although in China, its budding market, paid subscriptions have been slow to come.

PicsArt lets users do all sorts of creative work, including virtually posing with their idol. Image: PicsArt via Weibo

“In China, people don’t want to pay because they don’t believe in the products. But if they understand your value, they are willing to pay, for example, they pay a lot for mobile games,” said Jennifer Liu, PicsArt China’s country manager.

And Nam is positive that Chinese users will come to appreciate the app’s value. “In order for this new generation to create really differentiated content, become influencers, or be more relevant on social media, they have to edit their content. It’s just a natural way for them to do that.”


Source: https://techcrunch.com/2019/03/20/picsart-china/

Slowdown or not, China’s luxury goods still seeing high-end growth

Despite well-documented concerns over an economic slowdown in China, the country’s luxury goods market is still seeing opulent growth according to a new study. Behind secular and demographic tailwinds, the luxury sector is set to continue its torrid expansion in the face of volatility as it’s quickly becoming a defensive economic crown jewel.

Using proprietary analysis, company data, primary source interviews, and third-party research, Bain & Company dug into the ongoing expansion of China’s high-end market in a report titled “What’s Powering China’s Market for Luxury Goods?”

In recent years, China has become one of the largest markets for luxury good companies globally. And while many have raised concern around a drop-off in luxury demand, findings in the report point to the contrary, with Bain forecasting material growth throughout 2019 and beyond. The analysis provides a compelling breakdown of how the sector has seen and will see continued development, as well as a fascinating examination of what strategies separate winners and losers in the space.

The report is worth a quick read, as it manages to provide several insightful and differentiated data points with relative brevity, but here are the most interesting highlights in our view:


Source: https://techcrunch.com/2019/03/21/slowdown-or-not-chinas-luxury-goods-still-seeing-high-end-growth/

SEO Is a Means to an End: How Do You Prove Your Value to Clients?

Posted by KameronJenkins

“Prove it” is pretty much the name of the game at this point.

As SEOs, we invest so much effort into finding opportunities for our clients, executing strategies, and on the best days, getting the results we set out to achieve.

That’s why it feels so deflating (not to mention mind-boggling) when, after all those increases in rankings, traffic, and conversions our work produced, our clients still aren’t satisfied.

Where’s the disconnect?

The value of SEO in today’s search landscape

You don’t have to convince SEOs that their work is valuable. We know full well how our work benefits our clients’ websites.

  1. Our attention on crawling and indexing ensures that search engine bots crawl all our clients’ important pages, that they’re not wasting time on any unimportant pages, and that only the important, valuable pages are in the index.
  2. Because we understand how Googlebot and other crawlers work, we know how to ensure that search engines understand our pages as they’re intended to be understood, and how to eliminate any barriers to that understanding (e.g., adding appropriate structured data, as sketched after this list, or diagnosing JavaScript issues).
  3. We spend our time improving speed, ensuring appropriate language targeting, looking into UX issues, ensuring accessibility, and more because we know the high price that Google places on the searcher experience.
  4. We research the words and phrases that our clients’ ideal customers use to search for solutions to their problems and help create content that satisfies those needs. In turn, Google rewards our clients with high rankings that capture clicks. Over time, this can lower our clients’ customer acquisition costs.
  5. Time spent earning links for our clients builds the authority they need to gain trust and perform well in search results.
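
To make the structured-data point in item 2 concrete, here is a minimal sketch of an Organization block in schema.org’s JSON-LD format, generated with Python so the snippet stays self-contained. The business name, URL, and profiles are hypothetical placeholders, not details from this post.

```python
import json

# Minimal sketch of an Organization block in schema.org JSON-LD.
# The business details are hypothetical placeholders; the generated JSON
# would be embedded in the page inside a <script type="application/ld+json"> tag.
organization = {
    "@context": "https://schema.org",
    "@type": "Organization",
    "name": "Example Plumbing Co.",       # hypothetical client
    "url": "https://www.example.com",     # hypothetical domain
    "telephone": "+1-555-0100",
    "sameAs": [
        "https://www.facebook.com/exampleplumbing",  # hypothetical social profile
    ],
}

print(json.dumps(organization, indent=2))
```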

There are so many other SEO activities that drive real, measurable impact for our clients, even in a search landscape that is more crowded and sends fewer clicks to websites than ever before. Despite those results, we’ll still fall short if we fail to connect the dots for our clients.

Rankings, traffic, conversions… what’s missing?

What’s a keyword ranking worth without clicks?

What’s organic traffic worth without conversions?

What are conversions worth without booking/signing the lead?

Rankings, traffic, and conversions are all critical SEO metrics to track if you want to prove the success of your efforts, but they are all means to an end.

At the end of the day, what your client truly cares about is their return on investment (ROI). In other words, if they can’t mentally make the connection between your SEO results and their revenue, then the client might not keep you around for long.

From searcher to customer: I made this diagram for a past client to help demonstrate how they get revenue from SEO.

But how can you do that?

10 tips for attaching value to organic success

If you want to help your clients get a clearer picture of the real value of your efforts, try some of the following methods.

1. Know what constitutes a conversion

What’s the main action your client wants people to take on their website? This is usually something like a form fill, a phone call, or an on-site purchase (e-commerce). Knowing how your client uses their website to make money is key.

2. Ask your clients what their highest value jobs are

Know what types of jobs/purchases your client is prioritizing so you can prioritize them too. It’s common for clients to want to balance their “cash flow” jobs (usually lower value but higher volume) with their “big time” jobs (higher value but lower volume). You can pay special attention to performance and conversions on these pages.

3. Know your client’s close rate

How many of the leads your campaigns generate end up becoming customers? This will help you assign values to goals (tip #6).

4. Know your client’s average customer value

This can get tricky if your client offers different services that all have different values, but you can combine average customer value with close rate to come up with a monetary value to attach to goals (tip #6).

5. Set up goals in Google Analytics

Once you know what constitutes a conversion on your client’s website (tip #1), you can set up a goal in Google Analytics. If you’re not sure how to do this, read up on Google’s documentation.

6. Assign goal values

Knowing that the organic channel led to a conversion is great, but knowing the estimated value of that conversion is even better! For example, if you know that your client closes 10% of the leads that come through contact forms, and the average value of their customers is $500, you could assign a value of $50 per goal completion.
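
A minimal sketch of that arithmetic, using the hypothetical close rate and average customer value from the example above:

```python
def goal_value(close_rate: float, avg_customer_value: float) -> float:
    """Estimated dollar value of a single goal completion (e.g., a contact form fill)."""
    return close_rate * avg_customer_value

# The example figures above: a 10% close rate and a $500 average customer value.
print(goal_value(close_rate=0.10, avg_customer_value=500))  # 50.0
```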

7. Consider having an Organic-only view in Google Analytics

For the purpose of clarity, it could be valuable to set up an additional Google Analytics view just for your client’s organic traffic. That way, when you’re looking at your goal report, you know you’re checking organic conversions and value only.

8. Calculate how much you would have had to pay for that traffic in Google Ads

I like to use the Keywords Everywhere plugin when viewing Google Search Console performance reports because it adds a cost per click (CPC) column next to your clicks column. This screenshot is from a personal blog website that I admittedly don’t do much with, hence the scant metrics, but you can see how easy this makes it to calculate how much you would have had to pay for the clicks you got your client for “free” (organically).
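
As a rough sketch of that calculation, assuming you’ve exported a performance report to CSV with per-query clicks and an estimated CPC column (the file name and column names below are hypothetical):

```python
import csv

def estimated_paid_value(csv_path: str) -> float:
    """Sum clicks * estimated CPC across every query in the export."""
    total = 0.0
    with open(csv_path, newline="") as f:
        for row in csv.DictReader(f):
            total += int(row["clicks"]) * float(row["cpc"])
    return total

# Hypothetical usage:
# print(f"Organic clicks would have cost roughly ${estimated_paid_value('gsc_export.csv'):,.2f}")
```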

9. Use Multi-Channel Funnels

Organic has value beyond last-click! Even when it’s not the channel your client’s customer came through, organic may have assisted in that conversion. Go to Google Analytics > Conversions > Multi-Channel Funnels.
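
To illustrate how assisted conversions add to the picture, here is a minimal sketch that values organic’s last-click and assisted conversions separately, using the hypothetical $50 goal value from tip #6 and made-up conversion counts:

```python
GOAL_VALUE = 50.0             # hypothetical per-goal value from tip #6

# Made-up numbers of the kind you'd read off the Multi-Channel Funnels report.
last_click_conversions = 40   # organic was the final touchpoint
assisted_conversions = 25     # organic appeared earlier in the conversion path

print(f"Last-click organic value: ${last_click_conversions * GOAL_VALUE:,.2f}")
print(f"Assisted organic value:   ${assisted_conversions * GOAL_VALUE:,.2f}")
```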

10. Bring all your data together

How you communicate all this data is just as important as the data itself. Use smart visualizations and helpful explanations to drive home the impact your work had on your client’s bottom line.
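
One possible way to pull those threads together before building the visualizations, sketched with pandas; every figure, column name, and constant here is hypothetical:

```python
import pandas as pd

# Hypothetical monthly figures pulled from Google Analytics and Search Console.
report = pd.DataFrame({
    "month": ["Jan", "Feb", "Mar"],
    "organic_clicks": [4200, 4800, 5600],
    "goal_completions": [35, 42, 55],
})

GOAL_VALUE = 50.0   # per-goal value from tip #6 (close rate x average customer value)
AVG_CPC = 2.25      # hypothetical average CPC from the tip #8 exercise

report["estimated_goal_value"] = report["goal_completions"] * GOAL_VALUE
report["estimated_paid_equivalent"] = report["organic_clicks"] * AVG_CPC

print(report)
print("Total estimated goal value:", report["estimated_goal_value"].sum())
```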


As many possibilities as we have for proving our value, doing so can be difficult and time-consuming. Other factors can complicate it further, such as:

  • Client is using multiple methods for customer acquisition, each with its own platform, metrics, and reporting
  • Client has low SEO maturity
  • Client is somewhat disorganized and doesn’t have a good grasp of things like average customer value or close rate

The challenges can seem endless, but there are ways to make this easier. I’ll be co-hosting a webinar on March 28th that focuses on this very topic. If you’re looking for ways to not only add value as an SEO but also prove it, check it out:

Save my spot!

And let’s not forget, we’re in this together! If you have any tips for showing your value to your SEO clients, share them in the comments below.



Source: https://moz.com/blog/seo-value-for-clients

PicsArt hits 130 million MAUs as Chinese flock to its photo editing app

If you’re like me and aren’t big on social media, you’d think the image filters that come built into most apps will do the job. But for many others, especially the younger crowd, making their photos stand out is a huge deal.

The demand is big enough that PicsArt, a rival to filtering companies VSCO and Snapseed, recently hit 130 million monthly active users worldwide, roughly a year after it amassed 100 million MAUs. Like VSCO, PicsArt now offers video overlays, though images are still its focus.

Nearly 80 percent of PicsArt’s users are under the age of 35, and those under 18 are driving most of its growth. These “Gen Z” users (the generation after millennials) aren’t obsessed with the next big, big thing. Rather, they pride themselves on having niche interests, be it K-pop, celebrities, anime, sci-fi or space science, topics that come in the form of filters, effects, stickers and GIFs in PicsArt’s content library.

“PicsArt is helping to drive a trend I call visual storytelling. There’s a generation of young people who communicate through memes, short-form videos, images and stickers, and they rarely use words,” Tammy Nam, who joined PicsArt as its chief operating officer in July, told TechCrunch in an interview.

PicsArt has so far raised $45 million, according to data collected by Crunchbase. It picked up $20 million from a Series B round in 2016 to grow its Asia focus and told TechCrunch that it’s “actively considering fundraising to fuel [its] rapid growth even more.”


PicsArt wants to help users stand out on social media, for instance, by virtually applying this rainbow makeup look on them. / Image: PicsArt via Weibo

The app doubles as a social platform, although its use case is much smaller than that of Instagram, Facebook and other mainstream social media products. About 40 percent of PicsArt’s users post on the app, putting it in a unique position: it competes with the social media juggernauts on one hand, and serves as a platform-agnostic app that facilitates content creation for its rivals on the other.

What separates PicsArt from the giants, according to Nam, is that people who do share there tend to be content creators rather than passive consumers.

“On TikTok and Instagram, the majority of the people there are consumers. Almost 100 percent of the people on PicsArt are creating or editing something. For many users, coming on PicsArt is a built-in habit. They come in every week, and find the editing process Zen-like and peaceful.”

Trending in China

Most of PicsArt’s users live in the United States, but the app owes much of its recent success to China, its fastest growing market with more than 15 million users. The regional growth, recently running at 10-30 percent month-over-month, is all the more remarkable given PicsArt’s zero user acquisition spend in a crowded market where pay-to-play is the norm for emerging startups.

“Many larger companies [in China] are spending a lot of money on advertising to gain market share. PicsArt has done zero paid marketing in China,” noted Nam.

Screenshot: TikTok-related stickers from PicsArt’s library

When people catch sight of an impressive image filtering effect online, many will ask about the tools behind it. Chinese users discover the Armenian startup through photos and videos hashtagged #PicsArt, not unlike how VSCO gets discovered via #vscocam on Instagram. It’s through such word of mouth that PicsArt broke into China, where users flocked to its Avengers-inspired disappearing superhero effect last May when the film was in theaters. China is now the company’s second largest market by revenue after the U.S.

Screenshot: PicsArt lets users easily apply the Avengers dispersion effect to their own photos

A hurdle that all media apps face in China is the country’s opaque guidelines on digital content. Companies in the business of disseminating information, from WeChat to TikTok, hire armies of content moderators to root out what the government deems inappropriate or illegal. PicsArt says it uses artificial intelligence to screen content and maintains a global moderation team that also keeps an eye on its content in China.

Despite being headquartered in Silicon Valley, PicsArt has placed its research and development center in Armenia, home to founder Hovhannes Avoyan. This gives the startup access to much cheaper engineering talent in the country and in neighboring Russia than it can hire in the U.S. Today, 70 percent of the company’s 360 employees work in engineering and product development (50 percent of whom are female), an investment it believes helps keep its creative tools up to date.

Most of PicsArt’s features are free to use, but the firm has also looked into getting paid. It rolled out a premium program last March that gives users more sophisticated functions and exclusive content. This segment has already leapfrogged advertising to be PicsArt’s largest revenue source, although in China, its budding market, paid subscriptions have been slow to come.


PicsArt lets users do all sorts of creative work, including virtually posing with their idol. / Image: PicsArt via Weibo

“In China, people don’t want to pay because they don’t believe in the products. But if they understand your value, they are willing to pay, for example, they pay a lot for mobile games,” said Jennifer Liu, PicsArt China’s country manager.

And Nam is positive that Chinese users will come to appreciate the app’s value. “In order for this new generation to create really differentiated content, become influencers, or be more relevant on social media, they have to edit their content. It’s just a natural way for them to do that.”


Source: https://techcrunch.com/2019/03/20/picsart-china/