After succumbing to her curiosity and peeking in the box, Pandora tries to quickly close the top as creatures representing evil and disease escape.
It’s hard to believe that Facebook only came into existence in February 2004, just 15 years ago. Once named thefacebook.com, it began a communication revolution that has put social media front and center in many parts of our daily lives. Whether we use Messenger to talk to friends, Instagram to follow our favorite influencer, or Pinterest to find a trending product, social media is everywhere.
Negative headlines about data privacy and streams of egregious content have been flashing warning signs about social media for some time. As the manager of a digital marketing agency, I see a few cautionary signs that tell me rigorous regulation of this industry is long overdue, and that when it does arrive, it will be a welcome reprieve.
1. Influencer marketing means what you see is not what you get
In arrangements called brand partnerships, social media influencers often get paid to blog and post about products. As a rule of thumb, every follower an influencer has equates to a penny. Therefore, an influencer with 10,000 followers may charge $100 per post plus additional production expenses. Ethically, if that person is posting about a product or service as part of a brand partnership, they should disclose it visibly. On social platforms, partner relationships are now being referenced more explicitly, but not always. That means people may follow influencers and try products being promoted in the posts without realizing the influencers are taking fees for creating those posts.
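As a back-of-the-envelope illustration of that penny-per-follower rule of thumb, the math works out as follows (the function name and its production-costs parameter are hypothetical illustrations, not a real pricing standard or API):

```python
# Sketch of the article's rule-of-thumb influencer pricing:
# roughly one cent per follower per sponsored post, plus any
# production expenses. Purely illustrative; real rates vary widely.

def estimate_post_rate(followers: int, production_costs: float = 0.0) -> float:
    """Estimate a per-post fee at ~$0.01 per follower, plus expenses."""
    RATE_PER_FOLLOWER = 0.01  # one penny per follower (rule of thumb)
    return followers * RATE_PER_FOLLOWER + production_costs

# An influencer with 10,000 followers, as in the example above:
print(estimate_post_rate(10_000))        # 100.0
print(estimate_post_rate(10_000, 50.0))  # 150.0, with $50 of production costs
```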
The Federal Trade Commission (FTC) has caught on to undisclosed brand partnerships. The FTC Endorsement Guides require that any “material connection” between the two parties, the paid endorser of the product or service and the brand advertiser, be conspicuously disclosed. Social media platforms are busy rolling out branded content tools that will require tagging of a business partner where there has been an “exchange of value,” but prior to these guidelines, consumers, sometimes children, were none the wiser.
2. Online reviews provide no recourse
Online reviews are an essential part of the digital era, and social media platforms such as Facebook and Yelp are an important source of consumer reviews. According to the BrightLocal Local Consumer Review Survey 2018, 86% of consumers read reviews for local businesses, and that percentage jumps to 95% for people aged 18 to 34. The problem is that consumers know the importance of reviews, and some are savvy at abusing the system.
For example, people who want to post a negative review frequently copy and paste the same review on as many platforms as possible. Angry customers will put a negative review on Yelp, then Facebook, then Google My Business, a feature of the Google search engine. The business can answer the review, of course, but it can be incredibly difficult to defend oneself without being seen to disparage the reviewer, who, by the way, is not always right. We recently talked with one of our customers, the owner of a local, 5-star-rated business. The business provided a retail service for a child, and afterward, the mother paid the bill and left with the boy, both quite happy. Two weeks later, the father returned with the boy to say how unhappy he was with the service that had been provided. The man proceeded to post a 1-star review on three platforms, remove a 5-star review that he had posted for the business a few months earlier, and disparage employees by name in the review.
There’s no arbitration for an online review and no “other side” of the story, and with some exceptions, the review site often does not verify that a purchase has even been made. The same BrightLocal survey says, “Negative reviews stop 40% of consumers wanting to use a business,” so the ability of consumers to post any review they would like, even if they have never purchased the product or service, needs to change. Even competitors can post a negative review using fake names; there’s nothing in place to stop them. A fair review process requires vetting (did a purchase actually take place?) and some form of reasonable recourse for the business, a monumental technological challenge for both social networks and search engines.
3. Social media platforms offer no real customer service
As you might imagine, our digital marketing agency works with different social media platforms every day. Facebook has a market capitalization, the value of its outstanding shares, of circa 550 billion dollars. Yet, if you have an issue, you have one preliminary option for support: you can click the round question mark button in the navigation and, from there, submit your help request online using the Report a Problem form.
As measured by its market cap, Facebook is the sixth largest company in the world. Facebook also operates Instagram, Messenger, and WhatsApp, and it is not obliged to provide any human form of customer service. Of course, neither are small businesses, but it’s hard to imagine one of the largest companies in the world operating with a Report a Problem form as the first stage of the customer service journey.
4. Social media content is now too vast to police
Consider movies: the Motion Picture Association of America (MPAA) has a rating system that warns audiences about a film’s content and its age appropriateness. Contrast the MPAA rating system with the current social media landscape, which has no enforceable content guidelines. If you disagree with content posted about your business, or even content that tags your business, you can appeal to Facebook to remove it. In our agency’s experience, those requests have been declined 100% of the time, even when there is a clear pattern of abuse.
Facebook Live, a broadcasting feature available within the Facebook app, has been used to capture murders and suicides. Social media posts on many platforms are rife with profanity and hate speech. As a user, you can block people, but you have no way to actively filter newsfeed content for profanity or inappropriate imagery. I suppose that, as with the movies, you can choose not to “attend,” but there should really be a viable filter for social media users who wish to block violent images or profanity in the copy, if they so choose. However, allowing the user to filter content would imperil the revenue model for social media networks, which depends on users seeing ads interspersed in the newsfeed.
5. Personal data is not secure with social media companies
The revelations that came to light in the Cambridge Analytica scandal were shocking. Cambridge Analytica employees and contractors acquired the data of tens of millions of Facebook users via a Facebook data breach in 2014. This data was used to construct user profiles in advance of the 2016 US presidential election and to target marketing campaigns at specific audiences more effectively. According to The Guardian, when Facebook found out about the breach in 2015 and learned that individual data had been harvested, it failed to notify the Facebook users who were affected. Facebook also did not work to recover the data from the breach.
In fact, the rapid growth of social media platforms over the last 15 years has meant that social media companies have not been held to the same standard as traditional media companies and corporations in many areas, including privacy. They should be. It’s been convenient to be labeled a social media platform, as if best practice for other companies does not apply. Facebook recently announced that it anticipates a fine from the FTC of 3 to 5 billion dollars for privacy breaches and has set aside 3 billion dollars for legal fees, which reaffirms the gravity of the situation.
So, what’s wrong with social media? Ads drive the revenue model for social media companies and only work if the platforms are continuously and actively used. Otherwise, no one would see the ads. To a certain extent, questionable content attracts more users, and this phenomenon has fueled the success of companies such as Snapchat where often teens, in particular, post inappropriate content that conveniently disappears. But of course, the posts have already served their purpose and captured the attention of the audience the teen was hoping to reach. Similarly, outrageous reviews, hate speech, and online bullying attract an audience, so social media companies are not particularly incentivized to restrain them. If you haven’t done so recently, scroll through your Twitter feed and glance at the barbs traded daily.
Maturing social networks need leadership that is sensible, ethical, and genuinely interested in serving the public interest. Company leadership must be held accountable too, which becomes difficult when, within our own legislative branch, there is such a limited understanding of the revenue model that drives social media companies. In a Joint Hearing of the Commerce and Judiciary Committees on Capitol Hill in April of last year, Senator Orrin Hatch, R-Utah, asked Mark Zuckerberg, the CEO of Facebook, “So, how do you sustain a business model in which users don’t pay for your service?” Mark Zuckerberg replied, “Senator, we run ads.” Without a broad understanding of that basic truth and how to influence it, social media networks will make no real behavioral change.
Perhaps it is not quite as grim as the Greek myth of Pandora’s box, in which Pandora’s curiosity gets the better of her and she unleashes all the evils of the world, but the exponential growth of social media has nonetheless unleashed its own form of tyranny. Only when the latest features and app updates become truly secondary to the ethical execution of a meaningful company mission will the issues caused by social media start to wane.