Please Use Mute

The country would be better served if we allow both people to speak with fewer interruptions. I’m appealing to you to do that.

Chris Wallace, Moderator, USA Presidential Debate, 9/29/2020

The problem with the first presidential debate can be solved with technology. This is a rare moment when a distinctly human communication problem has an effective technological fix.

The problem: Constant interruptions of both the moderator and the opponent during a live broadcast debate.

The solution:

Step 1: Place candidates in separate rooms or in clear glass, soundproof boxes on the same physical stage. Separate rooms require reliable video and audio; clear boxes require only reliable audio. Either way, participants can hear both the questions and the responses.

Step 2: Make it clear at the outset that microphones will be muted when questions are asked and when the opponent has the floor.

Step 3: When it is a candidate’s turn to speak, open their microphone. When their time is up, mute it.

Step 4: Keep a live mic in each room that records everything said but broadcasts nothing during the debate. Those recordings may be released and commented on the following day; they must not be made available during the debate itself.

Automating the Forced Removal of Children in Poverty

Quote 1

Where the line is drawn between the routine conditions of poverty and child neglect is particularly vexing. Many struggles common among poor families are officially defined as child maltreatment, including not having enough food, having inadequate or unsafe housing, lacking medical care, or leaving a child alone while you work. Unhoused families face particularly difficult challenges holding on to their children, as the very condition of being homeless is judged neglectful.

Quote 2:

The AFST sees the use of public services as a risk to children. A quarter of the predictive variables in the AFST are direct measures of poverty: they track use of means-tested programs such as TANF, Supplemental Security Income, SNAP, and county medical assistance. Another quarter measure interaction with juvenile probation and CYF itself, systems that are disproportionately focused on poor and working-class communities, especially communities of color. The juvenile justice system struggles with many of the same racial and class inequities as the adult criminal justice system. A family’s interaction with CYF is highly dependent on social class: professional middle-class families have more privacy, interact with fewer mandated reporters, and enjoy more cultural approval of their parenting than poor or working-class families.

Quote 3:

We might call this poverty profiling. Like racial profiling, poverty profiling targets individuals for extra scrutiny based not on their behavior but rather on a personal characteristic: living in poverty. Because the model confuses parenting while poor with poor parenting, the AFST views parents who reach out to public programs as risks to their children.

Automating Inequality: How High-Tech Tools Profile, Police, and Punish the Poor by Virginia Eubanks

GDPR: Search Engines and Privacy

Quote 1:

The European Court of Justice set out the general rule for these decisions in 2014: the search engine which lists results leading to information about a person must balance the individual’s right to privacy against Google’s (and the greater public’s) right to display and read publicly available information.

Quote 2:

The bigger issue though is the – almost deliberate – lack of clarity. Each person’s details need to be considered on their own merit, and a decision made based on this balance between the rights of the individual and the rights of the wider society, based on a subjective consideration of the original crime, the person’s actions since and the benefit to society as a whole. This is further complicated by the fact that different rules will apply in different countries, even within the EU, as case law diverges. The result: Google is likely to face challenges if it takes anything other than a very obedient approach to those requests to be forgotten which it receives.

Google or Gone: UK Court Rules on ‘Right to be Forgotten,’ Data Protection Representatives (DPR), by Tim Bell, April 16, 2018

Phishing: Setting Traps

Lay traps: When you’ve mastered the basics above, consider setting traps for phishers, scammers and unscrupulous marketers. Some email providers — most notably Gmail — make this especially easy. When you sign up at a site that requires an email address, think of a word or phrase that represents that site for you, and then add that with a “+” sign just to the left of the “@” sign in your email address. For example, if I were signing up at example.com, I might give my email address as krebsonsecurity+example@gmail.com. Then, I simply go back to Gmail and create a folder called “Example,” along with a new filter that sends any email addressed to that variation of my address to the Example folder. That way, if anyone other than the company I gave this custom address to starts spamming or phishing it, that may be a clue that example.com shared my address with others (or that it got hacked, too!). I should note two caveats here. First, although this functionality is part of the email standard, not all email providers will recognize address variations like these. Also, many commercial Web sites freak out if they see anything other than numerals or letters, and may not permit the inclusion of a “+” sign in the email address field.

After Epsilon: Avoiding Phishing Scams & Malware, Krebs on Security, by Brian Krebs, 04/06/2011

Unintentional Insider Threat (UIT)

An unintentional insider threat is (1) a current or former employee, contractor, or business partner (2) who has or had authorized access to an organization’s network, system, or data and who, (3) through action or inaction without malicious intent, (4) unwittingly causes harm or substantially increases the probability of future serious harm to the confidentiality, integrity, or availability of the organization’s information or information systems.

Unintentional Insider Threat and Social Engineering, Insider Threat Blog, Carnegie Mellon University (CMU) Security Engineering Institute (SEI), by David Mundie, 03/31/2014

First They Came for the Poor

…one day in early 2000, I sat talking to a young mother on welfare about her experiences with technology. When our conversation turned to EBT cards, Dorothy Allen said, “They’re great. Except [Social Services] uses them as a tracking device.” I must have looked shocked, because she explained that her caseworker routinely looked at her purchase records. Poor women are the test subjects for surveillance technology, Dorothy told me. Then she added, “You should pay attention to what happens to us. You’re next.”

Dorothy’s insight was prescient. The kind of invasive electronic scrutiny she described has become commonplace across the class spectrum today.

Automating Inequality: How High-Tech Tools Profile, Police, and Punish the Poor by Virginia Eubanks

Spear Phishing: Effective Because it’s Believable

Quote 1:

Spear phishing is targeted. The attackers did their research, usually through social engineering. They might already know your name or your hometown, your bank, or your place of employment—information easily accessed via social media profiles and postings. That bit of personalized information adds a lot of credibility to the email.

Spear-phishing emails work because they’re believable.

Quote 2:

Spear-phishing attacks are not trivial or conducted by random hackers. They are targeted at a specific person, often times by a specific group. Many publicly documented advanced persistent threat (APT) attack groups, including Operation Aurora and the recently publicized FIN4 group, used spear-phishing attacks to achieve their goals.

Best Defense Against Spear Phishing, FireEye

Quote 1:

Phishing emails are exploratory attacks in which criminals attempt to obtain victims’ sensitive data, such as personally identifiable information (PII) or network access credentials. These attacks open the door for further infiltration into any network the victim can access. Phishing typically involves both social engineering and technical trickery to deceive victims into opening attached files, clicking on embedded links and revealing sensitive information.

Spear phishing is more targeted. Cyber criminals who use spear-phishing tactics segment their victims, personalize the emails and impersonate specific senders. Their goal is to trick targets into clicking a link, opening an attachment or taking an unauthorized action. A phishing campaign may blanket an entire database of email addresses, but spear phishing targets specific individuals within specific organizations with a specific mission. By mining social networks for personal information about targets, an attacker can write emails that are extremely accurate and compelling. Once the target clicks on a link or opens an attachment, the attacker establishes a foothold in the network, enabling them to complete their illicit mission.

Quote 2:

A spear-phishing attack can display one or more of the following characteristics:

  • Blended or multi-vector threat. Spear phishing uses a blend of email spoofing, dynamic URLs and drive-by downloads to bypass traditional defenses.
  • Use of zero-day vulnerabilities. Advanced spear-phishing attacks leverage zero-day vulnerabilities in browsers, plug-ins and desktop applications to compromise systems.
  • Multi-stage attack. The spear-phishing email is the first stage of a blended attack that involves further stages of malware outbound communications, binary downloads and data exfiltration.
  • Well-crafted email forgeries. Spear-phishing email threats usually target individuals, so they don’t bear much resemblance to the high-volume, broadcast spam that floods the Internet.

White Paper: Spear-Phishing Attacks, FireEye

GDPR: Search Engines and The Right to Be Forgotten

The “right to be forgotten” rule has caused a great deal of outrage over the past four years, since the EU’s top court ruled that it applied to search engines. It states that people should be able to ask for information about them to be removed from search results, if it is “inaccurate, inadequate, irrelevant or excessive.”…The right to be forgotten, which stems from EU privacy law, is not an absolute right. It is supposed to be balanced against the public interest and other factors.

Google Occupies an Odd Role in Enforcing Privacy Laws. A Businessman’s Landmark ‘Right To Be Forgotten’ Win Just Revealed It., Fortune, by David Meyer, April 16, 2018.

GDPR: Facebook data privacy scandal

Like any organization providing services to users in European Union countries, Facebook is bound by the EU General Data Protection Regulation (GDPR). Due to the scrutiny Facebook is already facing regarding the Cambridge Analytica scandal, as well as the general nature of the social media giant’s product being personal information, its strategy for GDPR compliance is similarly receiving a great deal of focus from users and other companies looking for a model of compliance…Facebook members outside the US and Canada have heretofore been governed by the company’s terms of service in Ireland. This has reportedly been changed prior to the start of GDPR enforcement, as this would seemingly make Facebook liable for damages for users internationally, due to Ireland’s status as an EU member.

“Shadow profiles” are stores of information that Facebook has obtained about other people—who are not necessarily Facebook users. The existence of “shadow profiles” was discovered as a result of a bug in 2013. When a user downloaded their Facebook history, that user would obtain not just his or her address book, but also the email addresses and phone numbers of their friends that other people had stored in their address books…Because of the way that Facebook synthesizes data in order to attribute collected data to existing profiles, data of people who do not have Facebook accounts congeals into dossiers, which are popularly called a “shadow profile.” It is unclear what other sources of input are added to said “shadow profiles,” a term that Facebook does not use, according to Zuckerberg in his Senate testimony.

Facebook data privacy scandal: A cheat sheet, TechRepublic, by James Sanders and Dan Patterson, June 14, 2018

Technology Is Not Politically Neutral

The proposed laws were impossible to obey, patently unconstitutional, and unenforceable, but that’s not the point. This is performative politics. The legislation was not intended to work; it was intended to heap stigma on social programs and reinforce the cultural narrative that those who access public assistance are criminal, lazy, spendthrift addicts…Technologies of poverty management are not neutral. They are shaped by our nation’s fear of economic insecurity and hatred of the poor; they in turn shape the politics and experience of poverty.

Automating Inequality: How High-Tech Tools Profile, Police, and Punish the Poor by Virginia Eubanks