Please Use Mute

The country would be better served if we allow both people to speak with fewer interruptions. I’m appealing to you to do that.

Chris Wallace, Moderator, USA Presidential Debate, 9/29/2020

The problem with the first presidential debate can be solved with technology. This is a rare moment when a distinctly human communication problem has an effective technological solution.

The problem: Constant interruptions of both the moderator and the opponent during a live broadcast debate.

The solution:

Step 1: Place candidates in separate rooms or in clear glass, soundproof boxes on the same physical stage. Separate rooms require reliable video and audio; clear boxes require only reliable audio. Either way, the participants can hear both the questions and the responses.

Step 2: Make it clear at the outset that microphones will be muted when questions are asked and when the opponent has the floor.

Step 3: When it is a candidate’s turn to speak, open their microphone. When their time is up, mute it.

Step 4: If a live mic in each room records everything said but not broadcast during the debate, those recordings may be released and commented on the following day. They must not be made available during the debate itself.
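As a sketch of how Steps 2 and 3 might be wired together, the toy controller below keeps every microphone muted except the current speaker's, and only for their allotted time. It is purely illustrative: the class, candidate names, and timings are assumptions, not a description of any real broadcast console.

```python
import time

class MicController:
    """Toy model of Steps 2-3: all mics start muted, only the candidate
    with the floor is unmuted, and they are muted again when time expires."""

    def __init__(self, candidates):
        # Step 2: every microphone begins the debate muted.
        self.muted = {name: True for name in candidates}

    def give_floor(self, speaker, seconds):
        # Step 3: open only the current speaker's microphone...
        for name in self.muted:
            self.muted[name] = (name != speaker)
        print(f"{speaker} has the floor for {seconds} seconds")
        time.sleep(seconds)            # speaking window
        self.muted[speaker] = True     # ...and mute it again when time is up
        print(f"{speaker} is muted")

# Hypothetical two-minute answers, alternating between candidates.
controller = MicController(["Candidate A", "Candidate B"])
controller.give_floor("Candidate A", 120)
controller.give_floor("Candidate B", 120)
```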

Security Breach Notification Laws

The National Conference of State Legislatures (NCSL) has provided a complete list of security breach notification laws implemented at the state level (USA):

All 50 states, the District of Columbia, Guam, Puerto Rico and the Virgin Islands have enacted legislation requiring private or governmental entities to notify individuals of security breaches of information involving personally identifiable information.

The NCSL page provides links to each and every law: Security Breach Notification Laws

 

Nonpublic Personal Information (NPI)

Gramm-Leach-Bliley Act (GLBA), 15 U.S.C. §§ 6801–6809 (2002). Available at: https://www.law.cornell.edu/uscode/text/15/6809

(4) Nonpublic personal information
(A) The term “nonpublic personal information” means personally identifiable financial information—
(i) provided by a consumer to a financial institution;
(ii) resulting from any transaction with the consumer or any service performed for the consumer; or
(iii) otherwise obtained by the financial institution.
(B) Such term does not include publicly available information, as such term is defined by the regulations prescribed under section 6804 of this title.
(C) Notwithstanding subparagraph (B), such term—
(i) shall include any list, description, or other grouping of consumers (and publicly available information pertaining to them) that is derived using any nonpublic personal information other than publicly available information; but
(ii) shall not include any list, description, or other grouping of consumers (and publicly available information pertaining to them) that is derived without using any nonpublic personal information.

(GLBA, 15 U.S.C. § 6809(4))
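Read as a decision rule, § 6809(4) has a fairly mechanical structure: personally identifiable financial information is NPI unless it is publicly available, except that a list or grouping of consumers derived using NPI is itself NPI. The sketch below only restates that structure for illustration; the field names are hypothetical and this is not legal guidance.

```python
def is_npi(info: dict) -> bool:
    """Toy restatement of the 15 U.S.C. § 6809(4) structure (illustrative only)."""
    if info.get("is_list_or_grouping"):
        # Subparagraph (C): a list/grouping of consumers is NPI only if it
        # was derived using nonpublic personal information.
        return info["derived_using_npi"]
    # Subparagraph (A): personally identifiable financial information...
    # Subparagraph (B): ...excluding publicly available information.
    return (info["is_personally_identifiable_financial_info"]
            and not info["is_publicly_available"])

# Hypothetical example: an account balance a consumer gave the bank.
print(is_npi({
    "is_list_or_grouping": False,
    "is_personally_identifiable_financial_info": True,
    "is_publicly_available": False,
}))  # -> True
```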

 

Personally Identifiable Financial Information (PIFI)

PIFI is defined in Securities and Exchange Commission (SEC), Final Rule: Privacy of Consumer Financial Information (Regulation S-P) 17 CFR Part 248 (2000). Available at: https://www.sec.gov/rules/final/34-42974.htm

Both the GLBA and the regulations define NPI in terms of PIFI.
The GLBA does not define PIFI, but the FTC regulations define the term to mean any information:
(i) A consumer provides to you [the financial institution] to obtain a financial product or service from you;
(ii) About a consumer resulting from any transaction involving a financial product or service between you and a consumer; or
(iii) You otherwise obtain about a consumer in connection with providing a financial product or service to that consumer.

Automating the Forced Removal of Children in Poverty

Quote 1

Where the line is drawn between the routine conditions of poverty and child neglect is particularly vexing. Many struggles common among poor families are officially defined as child maltreatment, including not having enough food, having inadequate or unsafe housing, lacking medical care, or leaving a child alone while you work. Unhoused families face particularly difficult challenges holding on to their children, as the very condition of being homeless is judged neglectful.

Quote 2:

The AFST sees the use of public services as a risk to children. A quarter of the predictive variables in the AFST are direct measures of poverty: they track use of means-tested programs such as TANF, Supplemental Security Income, SNAP, and county medical assistance. Another quarter measure interaction with juvenile probation and CYF itself, systems that are disproportionately focused on poor and working-class communities, especially communities of color. The juvenile justice system struggles with many of the same racial and class inequities as the adult criminal justice system. A family’s interaction with CYF is highly dependent on social class: professional middle-class families have more privacy, interact with fewer mandated reporters, and enjoy more cultural approval of their parenting than poor or working-class families.

Quote 3:

We might call this poverty profiling. Like racial profiling, poverty profiling targets individuals for extra scrutiny based not on their behavior but rather on a personal characteristic: living in poverty. Because the model confuses parenting while poor with poor parenting, the AFST views parents who reach out to public programs as risks to their children.

Automating Inequality: How High-Tech Tools Profile, Police, and Punish the Poor by Virginia Eubanks

GDPR: Search Engines and Privacy

Quote 1:

The European Court of Justice set out the general rule for these decisions in 2014: the search engine which lists results leading to information about a person must balance the individual’s right to privacy against Google’s (and the greater public’s) right to display / read publicly available information.

Quote 2:

The bigger issue though is the – almost deliberate – lack of clarity. Each person’s details need to be considered on their own merit, and a decision made based on this balance between the rights of the individual and the rights of the wider society, based on a subjective consideration of the original crime, the person’s actions since and the benefit to society as a whole. This is further complicated by the fact that different rules will apply in different countries, even within the EU, as case law diverges. The result: Google is likely to face challenges if it takes anything other than a very obedient approach to those requests to be forgotten which it receives.

Google or Gone: UK Court Rules on ‘Right to be Forgotten,’ Data Protection Representatives (DPR), by Tim Bell, April 16, 2018

Phishing: Setting Traps

Lay traps: When you’ve mastered the basics above, consider setting traps for phishers, scammers and unscrupulous marketers. Some email providers — most notably Gmail — make this especially easy. When you sign up at a site that requires an email address, think of a word or phrase that represents that site for you, and then add that with a “+” sign just to the left of the “@” sign in your email address. For example, if I were signing up at example.com, I might give my email address as krebsonsecurity+example@gmail.com. Then, I simply go back to Gmail and create a folder called “Example,” along with a new filter that sends any email addressed to that variation of my address to the Example folder. That way, if anyone other than the company I gave this custom address to starts spamming or phishing it, that may be a clue that example.com shared my address with others (or that it got hacked, too!). I should note two caveats here. First, although this functionality is part of the email standard, not all email providers will recognize address variations like these. Also, many commercial Web sites freak out if they see anything other than numerals or letters, and may not permit the inclusion of a “+” sign in the email address field.

After Epsilon: Avoiding Phishing Scams & Malware, Krebs on Security, by Brian Krebs, 04/06/2011
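A small sketch of the tagging idea Krebs describes: build a per-site address with the “+” convention and, when mail arrives, recover which site that address was given to. The helper names are illustrative, and, as the quote notes, not every provider or signup form accepts the “+” form.

```python
def tagged_address(mailbox: str, domain: str, site: str) -> str:
    """Build a per-site address using '+' sub-addressing,
    e.g. ('krebsonsecurity', 'gmail.com', 'example')
         -> 'krebsonsecurity+example@gmail.com'."""
    return f"{mailbox}+{site}@{domain}"

def site_tag(recipient: str):
    """Recover the tag from an incoming recipient address, if any.
    Mail tagged for one site but sent by someone else suggests the
    site shared (or leaked) the address."""
    local = recipient.split("@", 1)[0]
    return local.split("+", 1)[1] if "+" in local else None

print(tagged_address("krebsonsecurity", "gmail.com", "example"))
print(site_tag("krebsonsecurity+example@gmail.com"))  # -> 'example'
```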

Unintentional Insider Threat (UIT)

An unintentional insider threat is (1) a current or former employee, contractor, or business partner (2) who has or had authorized access to an organization’s network, system, or data and who, (3) through action or inaction without malicious intent, (4) unwittingly causes harm or substantially increases the probability of future serious harm to the confidentiality, integrity, or availability.

Unintentional Insider Threat and Social Engineering, Insider Threat Blog, Carnegie Mellon University (CMU) Security Engineering Institute (SEI), by David Mundie, 03/31/2014

Phishing: Establishing an Effective Defense

Quote 1:

…it’s unrealistic to expect every single user to avoid falling victim to the attack. User education may not be an effective preventative measure against this kind of phishing. Education can, however, be effective for encouraging users to report phishing emails. A well-designed incident response plan can help mitigate the impact of attacks.

Quote 2:

  • Defense 1 – Filter emails at the gateway. The first step stops as many malicious emails as possible from reaching users’ inboxes….

  • Defense 2 – Implement host-based controls. Host-based controls can stop phishing payloads that make it to the end user from running. Basic host-based controls include using antivirus and host-based firewalls…

  • Defense 3 – Implement outbound filtering. Outbound filtering is one of the most significant steps you can take to defend your organization’s network. With proper outbound filtering, attacks that circumvent all other controls can still be stopped…

Defending Against Phishing, Insider Threat Blog, Carnegie Mellon University (CMU) Security Engineering Institute (SEI), by Michael J. Albrethsen, 12/16/2016
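None of these defenses reduces to a few lines of code; they are products, policies, and network configuration. Still, a toy rule in the spirit of Defense 1 makes the layering concrete. The indicators and threshold below are invented purely for illustration and do not come from the SEI post.

```python
# Toy illustration of Defense 1 (gateway filtering). Real gateways combine
# sender reputation, SPF/DKIM/DMARC checks, URL rewriting, sandboxing, etc.
SUSPICIOUS_EXTENSIONS = (".exe", ".js", ".scr", ".hta")

def gateway_score(sender_domain: str, reply_to_domain: str,
                  attachments: list, spf_pass: bool) -> int:
    score = 0
    if not spf_pass:
        score += 2                      # sender failed SPF
    if reply_to_domain != sender_domain:
        score += 1                      # mismatched Reply-To header
    if any(name.lower().endswith(SUSPICIOUS_EXTENSIONS) for name in attachments):
        score += 3                      # risky attachment type
    return score

score = gateway_score("example.org", "mail.example.net", ["invoice.js"], spf_pass=False)
if score >= 3:
    print("quarantine before it reaches the inbox")   # Defense 1
```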

First They Came for the Poor

…one day in early 2000, I sat talking to a young mother on welfare about her experiences with technology. When our conversation turned to EBT cards, Dorothy Allen said, “They’re great. Except [Social Services] uses them as a tracking device.” I must have looked shocked, because she explained that her caseworker routinely looked at her purchase records. Poor women are the test subjects for surveillance technology, Dorothy told me. Then she added, “You should pay attention to what happens to us. You’re next.”

Dorothy’s insight was prescient. The kind of invasive electronic scrutiny she described has become commonplace across the class spectrum today.

Automating Inequality: How High-Tech Tools Profile, Police, and Punish the Poor by Virginia Eubanks