Phishing: Setting Traps

Lay traps: When you’ve mastered the basics above, consider setting traps for phishers, scammers and unscrupulous marketers. Some email providers — most notably Gmail — make this especially easy. When you sign up at a site that requires an email address, think of a word or phrase that represents that site for you, and then add that with a “+” sign just to the left of the “@” sign in your email address. For example, if I were signing up at example.com, I might give my email address as krebsonsecurity+example@gmail.com. Then, I simply go back to Gmail and create a folder called “Example,” along with a new filter that sends any email addressed to that variation of my address to the Example folder. That way, if anyone other than the company I gave this custom address to starts spamming or phishing it, that may be a clue that example.com shared my address with others (or that it got hacked, too!). I should note two caveats here. First, although this functionality is part of the email standard, not all email providers will recognize address variations like these. Also, many commercial Web sites freak out if they see anything other than numerals or letters, and may not permit the inclusion of a “+” sign in the email address field.

After Epsilon: Avoiding Phishing Scams & Malware, Krebs on Security, by Brian Krebs, 04/06/2011
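The plus-addressing trick can be sketched in a few lines. This is a minimal illustration of the tag scheme Krebs describes, assuming a Gmail-style address in which everything between the “+” and the “@” is treated as the tag:

```python
def tag_address(user: str, domain: str, tag: str) -> str:
    """Build a plus-addressed variant, e.g. user+tag@domain."""
    return f"{user}+{tag}@{domain}"

def extract_tag(address: str):
    """Return the +tag portion of an incoming address, or None if absent."""
    local, _, _ = address.partition("@")
    _, plus, tag = local.partition("+")
    return tag if plus else None

# If mail tagged "example" starts arriving from anyone other than
# example.com, the address may have been shared or leaked.
```

A filter keyed on `extract_tag` is the programmatic equivalent of the Gmail folder-and-filter setup described above.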

Unintentional Insider Threat (UIT)

An unintentional insider threat is (1) a current or former employee, contractor, or business partner (2) who has or had authorized access to an organization’s network, system, or data and who, (3) through action or inaction without malicious intent, (4) unwittingly causes harm or substantially increases the probability of future serious harm to the confidentiality, integrity, or availability of the organization’s information or information systems.

Unintentional Insider Threat and Social Engineering, Insider Threat Blog, Carnegie Mellon University (CMU) Security Engineering Institute (SEI), by David Mundie, 03/31/2014

Stalking and Social Engineering: Wheel-of-Slander

Abusive individuals, stalkers and criminals specializing in destroying or selling human beings (e.g., human traffickers, pimps, etc.) all have one primary goal: isolate the target. The wheel-of-slander is one of many methods commonly used to isolate a victim while simultaneously convincing other people that the victim ‘deserves’ whatever horrible crimes the stalker or criminal chooses to perpetrate.

Establishing Trust

This technique does not require establishing trust with the target. It only requires identifying established gossips and their hot-button topics; a period of observation and casual interaction is usually sufficient.

Initiating the Gossip

The stalker approaches the gossip with ‘news’ about the target, who just happens to be [hot button issue]. The stalker purposely crafts an enticing story, specifically designed to get the gossip emotionally involved in attacking the target. The story leaves out all concrete evidence, details about the stalker and the source(s) for the ‘facts’ provided. Instead, ‘proof’ is provided in common everyday actions and interactions, such as: the way the person walks or speaks, the type of clothes they wear, their physical address or even the color of their eyes/skin/hair.

The less concrete or valid the evidence, the more effective the gossiping campaign. This is because the people who enjoy verbally attacking another person (just for fun) will jump in and elaborate, while individuals who are more naïve will begin to believe that these things truly are concrete proof of [hot button issue]. Sadly, the individuals who see through this game will often remain silent and watch it play out from a distance, out of fear of becoming a target themselves.

Wheel-of-Slander

After the first gossip has been inspired to act, the stalker locates a second gossip with a different hot-button issue and proceeds to create an equally fictitious story about the target. This second gossip proceeds to spread vicious rumors with loose (at best) or completely irrelevant (at worst) indicators of ‘proof’ that the target is [second hot button topic].

Sadly, most people will not consider how highly improbable it is for multiple extreme accusations leveled at a single individual to contain any amount of verifiable truth. The accusations may even contradict one another outright, yet the crowd response will usually amount to a poorly defined sense of fear and revulsion best summarized as: this is a bad and dangerous person – stay away.

Common slanderous accusations used during a Wheel-of-Slander assault in the United States:

  1. Abuser (e.g., Child, Animal, etc.)
  2. Criminal Activity (e.g., They claim to be trustworthy, but they are really [hot button issue] – they just haven’t been caught yet)
  3. Cultural Heritage (e.g., They claim to be X, but they are really Y)
  4. Dating or ‘Interest’: (e.g., They claim to be single or in a relationship, but they are really dating or trying to date [hot button issue])
  5. Drug or Alcohol Addiction (e.g., They deny it, but they are really getting high/drunk in secret – they make sure no one sees them buying or using the stuff.)
  6. Hate Group Association (e.g., They deny it, but they are really a member of [hate group])
  7. Mental Illness (It’s important to note that ‘crazy’ never has to be proven, it only needs to be stated. Most people will believe another person is ‘crazy’ based on rumor alone.)
  8. Physical Illness (stigmatizing)
  9. Political Affiliations or Beliefs
  10. Racial Heritage (e.g., They look [race], but they are really [race])
  11. Secret Religion (e.g., They claim to be X, but they are really Y)
  12. Sexual Identity (e.g., They claim to be X, but they are really Y)
  13. Stalking (e.g., They claim to be dealing with a stalker, but they are really the stalker themselves.)
  14. Witchcraft (It’s important to note that the beliefs behind the Salem Witch Trials persist in the present day – in some communities people still believe witches are real, and accusations of witchcraft continue to lead to violence, including lynchings.)

(This list could contain hundreds of examples, but you get the idea.)

Exercise: Randomly select four (4) numbers and pull those items off the above list. Put that list together into a single description. Imagine being the victim and trying to address any one of these assaults. How would you make sense of what people are saying and why? Now try to imagine creating a method for addressing the problem. Where do you go? Who do you confront? Who do you sue for slander?
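For readers who want to run the exercise literally, a small sketch (category names abbreviated from the list above):

```python
import random

# The 14 accusation categories from the list above, keyed by number.
CATEGORIES = {
    1: "Abuser", 2: "Criminal Activity", 3: "Cultural Heritage",
    4: "Dating or 'Interest'", 5: "Drug or Alcohol Addiction",
    6: "Hate Group Association", 7: "Mental Illness",
    8: "Physical Illness", 9: "Political Affiliations or Beliefs",
    10: "Racial Heritage", 11: "Secret Religion",
    12: "Sexual Identity", 13: "Stalking", 14: "Witchcraft",
}

def spin_the_wheel(k: int = 4) -> list:
    """Pick k distinct categories at random, as the exercise directs."""
    picks = random.sample(sorted(CATEGORIES), k)
    return [CATEGORIES[n] for n in picks]

print(spin_the_wheel())
```

Each run produces a different four-item ‘description’ – which is exactly the point: any combination reads as damning, and none of it is evidence.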

Spotting Manipulation

Gossip is never factual. People who regularly participate in gossip do so for the thrill of destroying another human being. Therefore, gossips are inherently unethical and untrustworthy individuals. It is important to learn to recognize when this behavior is occurring and call it out for what it is.

Facts are verifiable. Human beings are creatures of habit, and most people say and do things that are logical – or, at least, follow a well-defined pattern. This makes fact-finding reasonably easy – as long as the person researching the facts is sincerely looking for FACTS instead of ‘proof’ for what they’ve already decided to be true.

  • Always question gossip.
  • Always question inflammatory statements.
  • Always question ‘facts’ provided without clear or verifiable proof.

Phishing: Establishing an Effective Defense

Quote 1:

…it’s unrealistic to expect every single user to avoid falling victim to the attack. User education may not be an effective preventative measure against this kind of phishing. Education can, however, be effective for encouraging users to report phishing emails. A well-designed incident response plan can help mitigate the impact of attacks.

Quote 2:

  • Defense 1 – Filter emails at the gateway. The first step stops as many malicious emails as possible from reaching users’ inboxes….

  • Defense 2 – Implement host-based controls. Host-based controls can stop phishing payloads that make it to the end user from running. Basic host-based controls include using antivirus and host-based firewalls…

  • Defense 3 – Implement outbound filtering. Outbound filtering is one of the most significant steps you can take to defend your organization’s network. With proper outbound filtering, attacks that circumvent all other controls can still be stopped…

Defending Against Phishing, Insider Threat Blog, Carnegie Mellon University (CMU) Security Engineering Institute (SEI), by Michael J. Albrethsen, 12/16/2016
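Defense 3, outbound filtering, can be illustrated with a minimal egress allowlist check. This is a sketch, not a production control, and the allowed domains are hypothetical:

```python
# Minimal sketch of outbound (egress) filtering: only connections to
# explicitly approved destinations are allowed, so a phishing payload
# that slips past the gateway and host controls still cannot phone home.
ALLOWED_DOMAINS = {"updates.example-corp.com", "mail.example-corp.com"}  # hypothetical

def outbound_allowed(destination: str) -> bool:
    """Allow the connection only if the destination host, or one of its
    parent domains, appears on the egress allowlist."""
    parts = destination.lower().split(".")
    candidates = {".".join(parts[i:]) for i in range(len(parts))}
    return bool(candidates & ALLOWED_DOMAINS)
```

Real egress filtering lives on the firewall or proxy, of course; the point of the sketch is the default-deny posture – anything not explicitly approved is blocked.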

Spear Phishing: Effective Because it’s Believable

Quote 1:

Spear phishing is targeted. The attackers did their research, usually through social engineering. They might already know your name or your hometown, your bank, or your place of employment—information easily accessed via social media profiles and postings. That bit of personalized information adds a lot of credibility to the email.

Spear-phishing emails work because they’re believable.

Quote 2:

Spear-phishing attacks are not trivial or conducted by random hackers. They are targeted at a specific person, often times by a specific group. Many publicly documented advanced persistent threat (APT) attack groups, including Operation Aurora and the recently publicized FIN4 group, used spear-phishing attacks to achieve their goals.

Best Defense Against Spear Phishing, FireEye

Quote 1:

Phishing emails are exploratory attacks in which criminals attempt to obtain victims’ sensitive data, such as personally identifiable information (PII) or network access credentials. These attacks open the door for further infiltration into any network the victim can access. Phishing typically involves both social engineering and technical trickery to deceive victims into opening attached files, clicking on embedded links and revealing sensitive information.

Spear phishing is more targeted. Cyber criminals who use spear-phishing tactics segment their victims, personalize the emails and impersonate specific senders. Their goal is to trick targets into clicking a link, opening an attachment or taking an unauthorized action. A phishing campaign may blanket an entire database of email addresses, but spear phishing targets specific individuals within specific organizations with a specific mission. By mining social networks for personal information about targets, an attacker can write emails that are extremely accurate and compelling. Once the target clicks on a link or opens an attachment, the attacker establishes a foothold in the network, enabling them to complete their illicit mission.

Quote 2:

A spear-phishing attack can display one or more of the following characteristics:

  • Blended or multi-vector threat. Spear phishing uses a blend of email spoofing, dynamic URLs and drive-by downloads to bypass traditional defenses.
  • Use of zero-day vulnerabilities. Advanced spearphishing attacks leverage zero-day vulnerabilities in browsers, plug-ins and desktop applications to compromise systems.
  • Multi-stage attack. The spear-phishing email is the first stage of a blended attack that involves further stages of malware outbound communications, binary downloads and data exfiltration.
  • Well-crafted email forgeries. Spear-phishing email threats usually target individuals, so they don’t bear much resemblance to the high-volume, broadcast spam that floods the Internet.

White Paper: Spear-Phishing Attacks, FireEye
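The first characteristic above, email spoofing, suggests a simple (and admittedly incomplete) heuristic: flag messages whose From and Reply-To domains disagree. A sketch using Python’s standard email library:

```python
from email.message import EmailMessage
from email.utils import parseaddr

def domain_of(header_value: str) -> str:
    """Extract the domain from a header like 'Name <user@host.com>'."""
    _, addr = parseaddr(header_value or "")
    return addr.rpartition("@")[2].lower()

def looks_spoofed(msg: EmailMessage) -> bool:
    """Flag a mismatch between the From and Reply-To domains --
    a common (though not conclusive) spear-phishing indicator."""
    from_dom = domain_of(msg.get("From", ""))
    reply_dom = domain_of(msg.get("Reply-To", ""))
    return bool(reply_dom) and reply_dom != from_dom
```

A mismatch alone proves nothing – legitimate mail sometimes sets a different Reply-To – which is why this belongs in a scoring pipeline rather than a hard block rule.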

Social Engineering Technique – The Shell Game

When discussing social engineering techniques, the danger posed by an insider threat should not be underestimated. The shell game technique is a subtle form of social engineering most frequently used by individuals with control over key documentation and specific collections of data. It is one of many techniques used by people who maliciously hack systems from the inside.

Establishing Trust

The trust that must be established for this technique to work is twofold: the malicious actor (hacker) must have 1) a job title or role that provides access to the collection of data being manipulated and 2) an established reputation for being the go-to or definitive source for questions about (or access to) the data being manipulated.

For the purposes of this article, I will use the following scenario: an Information Security Manager involved in the IT security policy decision-making process actively manipulates key decisions by playing the shell game with IT Security Policy.

In this scenario, the manager is responsible for developing, managing, securing and representing IT security policy. He or she has control over the creation of the content and the data repository where it is officially maintained. The manager is also responsible for enforcing security across the company, which is based on policy, so employees are in the habit of going to the person holding this job title when copies of existing policy or answers about the interpretation of policy are needed.

Historic Shell Game

The shell game is an old standby used by con artists. It involves three cups and a ball (or, in the original version, walnut shells and a pea). The con artist places the ball under one of the cups, scrambles the cups, and then challenges the target to locate the ball. At some point a wager is placed, and the con artist secretly removes the ball from under the cups while quickly scrambling them, causing the target to lose the wager.

IT Shell Game

In IT security, data repositories are the cups and the data within them are the balls. It’s important to remember that ‘data repositories’ can be databases, shared drives, intranet sites, hard copies (paper) and human beings (accessing knowledge and skills of an employee or expert).

IT Security Policies are collections of key decisions. Like legal documents, they are the data referenced when determining what will or will not be done under certain circumstances. IT Security Policy covers everything from 1) how access is granted to every asset the company owns to 2) logical security requirements applied to specific data types. The person who controls these documents has significant power over decisions concerning changes that are (or are not) made to the physical and logical environment.

The hacker in our scenario has control over all IT Security Policies. The next steps are as follows:

  • He or she sabotages efforts to create company-wide transparency through a central repository, accessible to all appropriate individuals.
  • He or she keeps printed copies of old versions in a locked drawer or an untidy pile of papers on a desk.
  • He or she keeps track of the many different versions saved to different locations on the intranet.
  • He or she takes the draft version of policies being revised or developed, modifies them, formats them to look like a final copy, prints them out and saves them to the same drawer or desk pile.
  • He or she modifies policy immediately after it has been approved, without discussing the changes or acquiring additional approval, and saves the modified version alongside the approved version.

This lays the groundwork for the shell game. The ‘shells’ are the many different versions and repositories established in the manager’s office and throughout the company. The ‘ball’ being chased is the final approved copy of an IT Security Policy, which is necessary when making key decisions concerning all aspects of IT security and IT development.

Acting as a malicious actor (hacker), the manager answers requests for copies of the policy by sending different people different versions. Sometimes these documents are pulled out of the messy pile on the desk or out of a drawer after the hacker makes a point of fumbling around while searching for the copy that he or she knows is “here somewhere.” Other times a link to an old copy saved on the intranet is provided or a modified electronic version is emailed out.

This is equivalent to a shell-game con artist pointing to a shell and saying, “the ball is here.” But when the shell is lifted, it reveals a blue ball – and the wager was specifically placed on the red one.

Using Microsoft Word’s Compare feature to identify the differences between multiple documents would reveal the discrepancies, but that requires having a Word formatted copy of all variations. PDF files can make this comparison process difficult and PDFs created from a scan of a physical copy complicate matters even further.
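The same comparison can be scripted when plain-text exports of each version are available. A minimal sketch using Python’s difflib, with hypothetical policy text:

```python
import difflib

def policy_diff(approved: str, received: str) -> list:
    """Return unified-diff lines between the approved policy text and
    the copy that was handed out; an empty list means they match."""
    return list(difflib.unified_diff(
        approved.splitlines(), received.splitlines(),
        fromfile="approved", tofile="received", lineterm=""))

# Hypothetical example of a quiet, material change:
approved = "Passwords must be rotated every 90 days.\nMFA is required."
received = "Passwords must be rotated every 365 days.\nMFA is optional."
for line in policy_diff(approved, received):
    print(line)
```

The hard part in practice is not the diff but obtaining clean text from PDFs and scans – which is precisely why the hacker in this scenario favors those formats.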

Also, the individuals receiving the copy trust both the person and the job title and never stop to question the accuracy of the document.

At some point, someone may notice the differences between two or more copies and confront the hacker. This is (usually) easily handled through an apology, excuses about keeping track of things, and a copy of a version that may or may not be current or properly reviewed and approved.

This misinformation campaign has many malicious uses including (but not limited to): 1) eliminating employees who stand in the way of malicious objectives (e.g.: the employee is fired for failing to implement security requirements clearly detailed by IT Security Policy – the copy the person was not provided) and 2) reducing the security established on a specific system, which is then targeted by the hacker for clandestine modifications and ‘mistakenly’ left off the several-hundred-item list of systems available for review by external auditors.

Additional Application

This technique is a favorite among malicious actors who rely on falsifying data presented in reports. IT Security Policy is just one example of the many ways that this technique can be utilized.

Insider Threat Protection

There are a few things to consider:

  1. When managers are actively involved in distributing misinformation, particularly when that information concerns key decision-making documents, it should raise a red flag.
  2. All key decision-making documents (e.g.: legal documents, IT Security Policy, HR Policy, etc.) should be taken through a proper review and approval process before being published to a central repository accessible by all appropriate individuals.
  3. Consider establishing security controls around key decision-making documents that are similar to those placed on key financial assets. The person responsible for the accounting ledger does not have the power to write the checks due to the possibility of fraud. Similarly, a company may choose to place control of the repository housing these kinds of documents into the hands of someone who is not involved in the modification of systems or testing of IT security controls.

Industry standards dictate that IT Security Policy must be reviewed and approved by appropriate members of management on a regular basis (preferably annually) and made available to employees who require access. The additional controls listed above are examples of the kinds of measures that must be taken to prevent this form of exploitation.
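One additional control in this spirit: record a cryptographic hash of each policy at approval time, in a manifest maintained by someone outside the policy owner’s control, so any circulating copy can be verified against the approved version. A minimal sketch (names and policy text hypothetical):

```python
import hashlib

def fingerprint(text: str) -> str:
    """SHA-256 of a policy's text, recorded at approval time."""
    return hashlib.sha256(text.encode("utf-8")).hexdigest()

# Manifest recorded when each policy is formally approved; in practice
# this would be stored and maintained outside the policy owner's reach.
MANIFEST = {}

def approve(name: str, text: str) -> None:
    MANIFEST[name] = fingerprint(text)

def is_approved_copy(name: str, text: str) -> bool:
    """True only if the circulating copy matches the approved version."""
    return MANIFEST.get(name) == fingerprint(text)
```

Any ‘version’ pulled from a desk drawer or an old intranet folder that fails this check is, by definition, not the approved policy – which collapses the shell game entirely.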

Social Engineering Technique – The Idiot

When information security professionals discuss social engineering techniques, the conversation tends to revolve around outsiders attempting to gain access to information or physical assets – a serious and ever-present threat that must be appropriately addressed. However, a significant share of breaches involve insiders. This social engineering technique is one of many used by people who hack systems from the inside.

Establishing Trust

All insider jobs involve a period of establishing trust. When an employee completes work and participates in projects, the relative level of skill and competence displayed is a key factor in the amount of trust extended to that person. It is also an important piece of data used to determine the relative threat a specific individual presents to an organization. For example:

  • Employee ABC is a programmer on a development team with privileged access to multiple systems. ABC has consistently delivered high-quality work, actively participated in complicated troubleshooting and is known for identifying highly effective ‘out of the box’ solutions.
  • Employee 123 is a programmer on a development team with privileged access to multiple systems. 123 has consistently delivered substandard work, which has resulted in multiple discussions with management about improving job performance. 123 frequently asks others for help completing basic daily tasks and is known around the company as lazy and unfocused. 123 has somehow managed to perform well enough to remain employed.

Both employees have the same level of logical and physical access. When evaluating the risk of programmers as insider threats, it would be assumed that Employee ABC could cause significantly more damage than Employee 123. From a purely technical perspective, ABC is the higher risk.

Employee 123 has established a reputation for being incompetent and lazy, which creates a perception of inability. While 123 has the logical access to do significant damage, it is assumed this individual lacks the technical skills to pursue any kind of advanced programming or clandestine activity.

By acting like an incompetent and lazy employee, 123 has established the trust necessary to act as a significant insider threat.

Getting Fired

After the hacker has completed the tasks necessary to achieve the desired goal, the next step is to make a professional mistake or participate in an activity that results in leaving the company. This could be a technical error that sets back a project by several months, a loud and profanity-laced argument with a member of management or some other drama that further solidifies the commonly held opinion that this individual is an idiot.

Reduced Perception of Risk

After this person is let go, the usual termination procedures are followed, and access is removed in a timely manner. Given the perception of this individual as incompetent, it is human nature to assume nothing more needs to be examined or addressed because it is not possible for this person to successfully modify information systems and assets without getting caught. This assumption is what the hacker is counting on because a more in-depth and careful examination of the systems would reveal multiple highly sophisticated modifications, resulting in a steady breach (or potential future breach) of data and resources.

Insider Threat Protection

While employed: If an individual is skilled and savvy enough to get through all the degrees, certifications, skills tests and interviews required for the job, then their transformation into an incompetent idiot is worthy of attention from management. It’s possible the individual truly is lazy and difficult to work with. It’s also possible the reputation is being actively established to cover their tracks. Either way, it’s worthy of investigation and monitoring. Some questions to consider:

  • Does the employee work at odd hours?
  • Does the employee attempt to access areas that are not necessary or appropriate for the job?
  • Does the employee manage to get around a thorough review of work at any point in the process?
  • Does the employee spend time ‘bothering’ people with higher, complementary or different privileged access rights?
  • Does the employee spend a lot of time ‘playing with’ their phone?
  • Does the employee regularly bring privately owned equipment (e.g.: USB drives) to work?

While none of these activities, by themselves, is proof of malicious activity, they are worthy of note. An in-depth review of access, completed work and other activities may be warranted.
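The first question above, odd-hours activity, is straightforward to surface from authentication logs. The sketch below assumes a 07:00–19:00 business window and (user, ISO-8601 timestamp) log entries; both are illustrative choices, not prescriptions:

```python
from datetime import datetime

WORK_START, WORK_END = 7, 19  # assumed business hours, 07:00-19:00

def odd_hours_logins(events: list) -> list:
    """Given (user, ISO-8601 timestamp) pairs, return the events that
    fall outside business hours and may warrant a closer look."""
    flagged = []
    for user, stamp in events:
        hour = datetime.fromisoformat(stamp).hour
        if not (WORK_START <= hour < WORK_END):
            flagged.append((user, stamp))
    return flagged
```

As the text notes, an off-hours login is a prompt for review, not proof of anything; the value is in correlating several weak indicators for the same individual.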

After termination: Performing thorough risk reviews and proper termination procedures across all systems and for all employees is the best protection against this kind of threat. Never assume a specific measure is unnecessary because the individual in question is perceived as incompetent.