Bragging Rights: Social Media Policy Development

This course is designed to help small business owners, human resources professionals, and marketing executives understand some of the legal ramifications of workplace social media issues.

I completed the Udemy.com course “The Legal Implications of Social Media in the Workplace: Regulatory and Case Law Considerations for Employers’ Social Media Policy Development.”

It provides a good overview of the laws most commonly relied upon in social media policy development. There are several case studies that provide excellent insight into the potential consequences of implementing a poorly written or unenforced policy.

For information security policy analysts with extensive experience researching and writing security policy, most of this will be review, but an examination of the basics is often useful.

Security Breach Notification Laws

The National Conference of State Legislatures (NCSL) has provided a complete list of security breach notification laws implemented at the state level (USA):

All 50 states, the District of Columbia, Guam, Puerto Rico and the Virgin Islands have enacted legislation requiring private or governmental entities to notify individuals of security breaches of information involving personally identifiable information.

This page provides links to each individual law: Security Breach Notification Laws

 

Nonpublic Personal Information (NPI)

Gramm-Leach-Bliley Act (GLBA), 15 U.S.C. § 6801-6809 (2002). Available at: https://www.law.cornell.edu/uscode/text/15/6809

(4) Nonpublic personal information
(A) The term “nonpublic personal information” means personally identifiable financial information—
(i) provided by a consumer to a financial institution;
(ii) resulting from any transaction with the consumer or any service performed for the consumer; or
(iii) otherwise obtained by the financial institution.
(B) Such term does not include publicly available information, as such term is defined by the regulations prescribed under section 6804 of this title.
(C) Notwithstanding subparagraph (B), such term—
(i) shall include any list, description, or other grouping of consumers (and publicly available information pertaining to them) that is derived using any nonpublic personal information other than publicly available information; but
(ii) shall not include any list, description, or other grouping of consumers (and publicly available information pertaining to them) that is derived without using any nonpublic personal information.

(GLBA, 15 U.S.C. § 6809(4))

 

Personally Identifiable Financial Information (PIFI)

PIFI is defined in Securities and Exchange Commission (SEC), Final Rule: Privacy of Consumer Financial Information (Regulation S-P) 17 CFR Part 248 (2000). Available at: https://www.sec.gov/rules/final/34-42974.htm

Both the GLBA and the regulations define NPI in terms of PIFI.
The GLBA does not define PIFI, but the FTC regulations define the term to mean any information:
(i) A consumer provides to you [the financial institution] to obtain a financial product or service from you;
(ii) About a consumer resulting from any transaction involving a financial product or service between you and a consumer; or
(iii) You otherwise obtain about a consumer in connection with providing a financial product or service to that consumer.

GDPR: Search Engines and Privacy

Quote 1:

The European Court of Justice set out the general rule for these decisions in 2014: the search engine which lists results leading to information about a person must balance the individual’s right to privacy against Google’s (and the greater public’s) right to display / read publicly available information.

Quote 2:

The bigger issue though is the – almost deliberate – lack of clarity. Each person’s details need to be considered on their own merit, and a decision made based on this balance between the rights of the individual and the rights of the wider society, based on a subjective consideration of the original crime, the person’s actions since and the benefit to society as a whole. This is further complicated by the fact that different rules will apply in different countries, even within the EU, as case law diverges. The result: Google is likely to face challenges if it takes anything other than a very obedient approach to those requests to be forgotten which it receives.

Google or Gone: UK Court Rules on ‘Right to be Forgotten,’ Data Protection Representatives (DPR), by Tim Bell, April 16, 2018

Phishing: Setting Traps

Lay traps: When you’ve mastered the basics above, consider setting traps for phishers, scammers and unscrupulous marketers. Some email providers — most notably Gmail — make this especially easy. When you sign up at a site that requires an email address, think of a word or phrase that represents that site for you, and then add that with a “+” sign just to the left of the “@” sign in your email address. For example, if I were signing up at example.com, I might give my email address as krebsonsecurity+example@gmail.com. Then, I simply go back to Gmail and create a folder called “Example,” along with a new filter that sends any email addressed to that variation of my address to the Example folder. That way, if anyone other than the company I gave this custom address to starts spamming or phishing it, that may be a clue that example.com shared my address with others (or that it got hacked, too!). I should note two caveats here. First, although this functionality is part of the email standard, not all email providers will recognize address variations like these. Also, many commercial Web sites freak out if they see anything other than numerals or letters, and may not permit the inclusion of a “+” sign in the email address field.

After Epsilon: Avoiding Phishing Scams & Malware, Krebs on Security, by Brian Krebs, 04/06/2011
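
As a rough sketch of the tagging scheme Krebs describes (the mailbox name, site names, and helper functions below are hypothetical, not taken from his article), the following Python snippet builds a per-site “+” address and maps an incoming recipient address back to the site it was originally given to. If tagged mail later arrives from an unrelated sender, the tag identifies which site leaked or shared the address.

    # Sketch of the "+" tagging trick described above; names are hypothetical.
    # Gmail delivers user+anything@gmail.com to user@gmail.com, so the tag
    # survives in the recipient address and identifies who it was given to.
    BASE_USER = "myname"
    DOMAIN = "gmail.com"

    def tagged_address(site):
        """Build a per-site address, e.g. 'example.com' -> 'myname+example@gmail.com'."""
        tag = site.split(".")[0].lower()
        return f"{BASE_USER}+{tag}@{DOMAIN}"

    def site_from_recipient(recipient):
        """Recover the tag from an incoming recipient address, or None if untagged."""
        local, _, domain = recipient.partition("@")
        if domain.lower() != DOMAIN or "+" not in local:
            return None
        user, _, tag = local.partition("+")
        return tag if user == BASE_USER else None

    print(tagged_address("example.com"))                    # myname+example@gmail.com
    print(site_from_recipient("myname+example@gmail.com"))  # example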

Unintentional Insider Threat (UIT)

An unintentional insider threat is (1) a current or former employee, contractor, or business partner (2) who has or had authorized access to an organization’s network, system, or data and who, (3) through action or inaction without malicious intent, (4) unwittingly causes harm or substantially increases the probability of future serious harm to the confidentiality, integrity, or availability of the organization’s information or information systems.

Unintentional Insider Threat and Social Engineering, Insider Threat Blog, Carnegie Mellon University (CMU) Software Engineering Institute (SEI), by David Mundie, 03/31/2014

Spear Phishing: Effective Because it’s Believable

Quote 1:

Spear phishing is targeted. The attackers did their research, usually through social engineering. They might already know your name or your hometown, your bank, or your place of employment—information easily accessed via social media profiles and postings. That bit of personalized information adds a lot of credibility to the email.

Spear-phishing emails work because they’re believable.

Quote 2:

Spear-phishing attacks are not trivial or conducted by random hackers. They are targeted at a specific person, often times by a specific group. Many publicly documented advanced persistent threat (APT) attack groups, including Operation Aurora and the recently publicized FIN4 group, used spear-phishing attacks to achieve their goals.

Best Defense Against Spear Phishing, FireEye

Quote 3:

Phishing emails are exploratory attacks in which criminals attempt to obtain victims’ sensitive data, such as personally identifiable information (PII) or network access credentials. These attacks open the door for further infiltration into any network the victim can access. Phishing typically involves both social engineering and technical trickery to deceive victims into opening attached files, clicking on embedded links and revealing sensitive information.

Spear phishing is more targeted. Cyber criminals who use spear-phishing tactics segment their victims, personalize the emails and impersonate specific senders. Their goal is to trick targets into clicking a link, opening an attachment or taking an unauthorized action. A phishing campaign may blanket an entire database of email addresses, but spear phishing targets specific individuals within specific organizations with a specific mission. By mining social networks for personal information about targets, an attacker can write emails that are extremely accurate and compelling. Once the target clicks on a link or opens an attachment, the attacker establishes a foothold in the network, enabling them to complete their illicit mission.

Quote 4:

A spear-phishing attack can display one or more of the following characteristics:

  • Blended or multi-vector threat. Spear phishing uses a blend of email spoofing, dynamic URLs and drive-by downloads to bypass traditional defenses.
  • Use of zero-day vulnerabilities. Advanced spear-phishing attacks leverage zero-day vulnerabilities in browsers, plug-ins and desktop applications to compromise systems.
  • Multi-stage attack. The spear-phishing email is the first stage of a blended attack that involves further stages of malware outbound communications, binary downloads and data exfiltration.
  • Well-crafted email forgeries. Spear-phishing email threats usually target individuals, so they don’t bear much resemblance to the high-volume, broadcast spam that floods the Internet.

White Paper: Spear-Phishing Attacks, FireEye
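
One concrete sign of the “well-crafted email forgeries” described above is a link whose visible text names one domain while the underlying href points somewhere else. The heuristic below is an illustrative sketch, not FireEye’s detection logic; the parsing approach and the example message are assumptions.

    # Minimal sketch: flag links whose visible text looks like a domain that
    # differs from the actual href target -- a common spear-phishing forgery cue.
    from html.parser import HTMLParser
    from urllib.parse import urlparse
    import re

    class LinkCollector(HTMLParser):
        def __init__(self):
            super().__init__()
            self.links = []        # list of (href, visible_text)
            self._href = None
            self._text = []

        def handle_starttag(self, tag, attrs):
            if tag == "a":
                self._href = dict(attrs).get("href")
                self._text = []

        def handle_data(self, data):
            if self._href is not None:
                self._text.append(data)

        def handle_endtag(self, tag):
            if tag == "a" and self._href is not None:
                self.links.append((self._href, "".join(self._text).strip()))
                self._href = None

    def suspicious_links(html_body):
        """Return links whose displayed text names a different domain than the href."""
        parser = LinkCollector()
        parser.feed(html_body)
        flagged = []
        for href, text in parser.links:
            href_domain = urlparse(href).netloc.lower()
            shown = re.search(r"([a-z0-9-]+\.)+[a-z]{2,}", text.lower())
            if shown and href_domain and shown.group(0) not in href_domain:
                flagged.append((text, href))
        return flagged

    body = '<p>Verify your account at <a href="http://198.51.100.7/login">www.yourbank.com</a></p>'
    print(suspicious_links(body))   # [('www.yourbank.com', 'http://198.51.100.7/login')]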

GDPR: Search Engines and The Right to Be Forgotten

The “right to be forgotten” rule has caused a great deal of outrage over the past four years, since the EU’s top court ruled that it applied to search engines. It states that people should be able to ask for information about them to be removed from search results, if it is “inaccurate, inadequate, irrelevant or excessive.”…The right to be forgotten, which stems from EU privacy law, is not an absolute right. It is supposed to be balanced against the public interest and other factors.

Google Occupies an Odd Role in Enforcing Privacy Laws. A Businessman’s Landmark ‘Right To Be Forgotten’ Win Just Revealed It., Fortune, by David Meyer, April 16, 2018.

Social Engineering Technique – The Shell Game

When discussing social engineering techniques, the danger posed by an insider threat should not be underestimated. The shell game technique is a subtle form of social engineering most frequently used by individuals with control over key documentation and specific collections of data. It is one of many techniques used by people who maliciously hack systems from the inside.

Establishing Trust

The trust that must be established for this technique to work is twofold: the malicious actor (hacker) must have 1) a job title or role that provides access to the collection of data being manipulated and 2) an established reputation for being the go-to or definitive source for questions about (or access to) the data being manipulated.

For the purposes of this article, I will use the following scenario: an Information Security Manager involved in the IT Security Policy decision-making process who actively manipulates key decisions by using the shell game technique with IT Security Policy.

In this scenario, the manager is responsible for developing, managing, securing and representing IT Security Policy. He or she controls both the creation of the content and the data repository where it is officially maintained. The manager is also responsible for enforcing security across the company, and because enforcement is based on policy, employees are in the habit of going to the person holding this job title when they need copies of existing policy or answers about its interpretation.

Historic Shell Game

The shell game is an old standby used by con artists. It involves gambling on three cups and a ball (or the discarded shells of a large nut and a pea). The con artist places the ball under one of the cups, scrambles the cups, and then challenges the target to locate the ball. At some point a wager is placed, and the con artist secretly removes the ball from under the cups while quickly scrambling them, causing the target to lose the wager.

IT Shell Game

In IT security, data repositories are the cups and the data within them are the balls. It’s important to remember that ‘data repositories’ can be databases, shared drives, intranet sites, hard copies (paper) and human beings (the knowledge and skills of an employee or expert).

IT Security Policies are collections of key decisions. Like legal documents, they are the data referenced when determining what will or will not be done under certain circumstances. IT Security Policy covers everything from 1) how access is granted to every asset the company owns to 2) logical security requirements applied to specific data types. The person who controls these documents has significant power over decisions concerning changes that are (or are not) made to the physical and logical environment.

The hacker in our scenario has control over all IT Security Policies. The next steps are as follows:

  • He or she sabotages efforts to create company-wide transparency through a central repository, accessible to all appropriate individuals.
  • He or she keeps printed copies of old versions in a locked drawer or an untidy pile of papers on a desk.
  • He or she keeps track of the many different versions saved to different locations on the intranet.
  • He or she takes the draft version of policies being revised or developed, modifies them, formats them to look like a final copy, prints them out and saves them to the same drawer or desk pile.
  • He or she modifies policy immediately after it has been approved, without discussing the changes or acquiring additional approval, and saves the modified version alongside the approved version.

This lays the groundwork for the shell game. The ‘shells’ are the many different versions and repositories established in the manager’s office and throughout the company. The ‘ball’ being chased is the final approved copy of an IT Security Policy, which is necessary when making key decisions concerning all aspects of IT security and IT development.

Acting as a malicious actor (hacker), the manager answers requests for copies of the policy by sending different people different versions. Sometimes these documents are pulled out of the messy pile on the desk or out of a drawer after the hacker makes a point of fumbling around while searching for the copy that he or she knows is “here somewhere.” Other times a link to an old copy saved on the intranet is provided or a modified electronic version is emailed out.

This is the equivalent of a shell game con artist pointing to a shell and saying, “the ball is here.” But when the shell is lifted, it reveals a blue ball, and the wager was specifically placed on the red ball.

Using Microsoft Word’s Compare feature to identify the differences between multiple documents would reveal the discrepancies, but that requires having a Word-formatted copy of all variations. PDF files can make this comparison process difficult, and PDFs created from a scan of a physical copy complicate matters even further.

Also, the individuals receiving the copy trust both the person and the job title and never stop to question the accuracy of the document.
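
Where Word’s Compare feature is not practical, extracting the text from two circulated versions and running a plain diff can surface the same discrepancies. A minimal sketch, assuming the policies have already been converted to plain text files with hypothetical names:

    # Sketch: a plain-text analogue of Word's Compare feature for two policy versions.
    # File names are hypothetical; PDFs or scans would first need text extraction.
    import difflib
    from pathlib import Path

    def policy_diff(approved_path, circulated_path):
        """Return a unified diff of two plain-text policy versions."""
        approved = Path(approved_path).read_text(encoding="utf-8").splitlines()
        circulated = Path(circulated_path).read_text(encoding="utf-8").splitlines()
        return "\n".join(difflib.unified_diff(
            approved, circulated,
            fromfile="approved", tofile="circulated", lineterm=""))

    print(policy_diff("policy_approved.txt", "policy_circulated.txt") or "No differences")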

At some point, someone may notice the differences between two or more copies and confront the hacker. This is (usually) easily handled through an apology, excuses about keeping track of things, and a copy of a version that may or may not be current or properly reviewed and approved.

This misinformation campaign has many malicious uses including (but not limited to): 1) eliminating employees who stand in the way of malicious objectives (e.g.: the employee is fired for failing to implement security requirements clearly detailed by IT Security Policy – the copy the person was not provided) and 2) reducing the security established on a specific system, which is then targeted by the hacker for clandestine modifications and ‘mistakenly’ left off the several-hundred-item list of systems available for review by external auditors.

Additional Application

This technique is a favorite among malicious actors who rely on falsifying data presented in reports. IT Security Policy is just one example of the many ways that this technique can be utilized.

Insider Threat Protection

There are a few things to consider:

  1. When managers are actively involved in distributing misinformation, particularly when that information concerns key decision-making documents, it should raise a red flag.
  2. All key decision-making documents (e.g.: legal documents, IT Security Policy, HR Policy, etc.) should be taken through a proper review and approval process before being published to a central repository accessible by all appropriate individuals.
  3. Consider establishing security controls around key decision-making documents that are similar to those placed on key financial assets. The person responsible for the accounting ledger does not have the power to write the checks due to the possibility of fraud. Similarly, a company may choose to place control of the repository housing these kinds of documents into the hands of someone who is not involved in the modification of systems or testing of IT security controls.

Industry standards dictate that IT Security Policy must be reviewed and approved by appropriate members of management on a regular basis (preferably annually) and made available to employees who require access. The additional controls listed above are examples of the kinds of measures that must be taken to prevent this form of exploitation.
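
As one possible implementation of the separation-of-duties idea in item 3 (a hypothetical sketch, not a prescribed control): a custodian outside the policy owner’s chain of command records a hash of each approved policy version, and any employee can check the copy they were handed against that register.

    # Sketch of a separation-of-duties control: the custodian alone updates the
    # register of approved policy hashes; anyone can verify a copy against it.
    # Names, file layout, and workflow are hypothetical.
    import hashlib
    import json
    from pathlib import Path

    REGISTER = Path("approved_policy_register.json")   # maintained only by the custodian

    def fingerprint(path):
        """SHA-256 of the file, so any post-approval edit changes the value."""
        return hashlib.sha256(Path(path).read_bytes()).hexdigest()

    def record_approval(policy_name, path, approved_on):
        """Custodian-only step, run once a version has completed formal approval."""
        register = json.loads(REGISTER.read_text()) if REGISTER.exists() else {}
        register[policy_name] = {"sha256": fingerprint(path), "approved_on": approved_on}
        REGISTER.write_text(json.dumps(register, indent=2))

    def verify_copy(policy_name, path):
        """Any employee: does the copy I was given match the approved version?"""
        register = json.loads(REGISTER.read_text())
        return register[policy_name]["sha256"] == fingerprint(path)

    record_approval("Access Control Policy", "acp_v3_approved.pdf", "2018-04-16")
    print(verify_copy("Access Control Policy", "acp_v3_approved.pdf"))   # True
    # A doctored copy pulled from a drawer or desk pile would hash differently -> False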