UC Berkeley CS 161 Computer Security Fall 2018 - Lecture 1 & 2: Introduction & Security Principles Notes

Disclaimer: These notes are not comprehensive. I only jotted down what was useful for me, with some of my personal flair ;)

What is security?

  • How can we enforce a desired property in the presence of an attacker?
  • Some properties to consider:
    • Data confidentiality - only permitted users can access the data
    • Data and computation integrity - ensure data is not tampered with or altered by unauthorized users
    • Availability - ensure systems and data are available to authorized users when they need it
      • Example of an attack on availability: DDoS
    • User privacy
    • Authentication
  • Security is fundamentally about people
  • “People often buy the illusion of security, not security itself, because they often have no way of knowing whether something is secure or not”
    • Example: PCI compliance
  • When assessing security, define the Trusted Computing Base (TCB)

Attacker Mindset

  • “Everything is hackable”
  • “There is no patch for human stupidity” - Social engineering is usually an organization’s weakest link
  • Breaking your own things is a great way to evaluate systems

Ethics

  • Generally okay to break your own stuff, but consent is necessary to attack other people’s assets

Threat Modeling

  • Example:
    • If we assume the attacker is rational, then we can deter attacks by making the expected benefit of an attack less than its expected cost (see the sketch after this list)
  • To defend, one must learn the offense
  • How can we separate users from attackers?
  • People attack systems for a reason. Find that reason. Attack the reason.
  • Who might attack you?
  • Why might someone attack you?
  • What resources might the attacker have?
  • How much time does the attacker have?
  • “Bear race”:
    • You and a friend see a bear. You don’t need to outrun the bear; you just need to outrun your friend.
    • Similarly, to secure your bike, you just need to make sure it is more secure than the ones next to it.
    • The general idea: you just have to be more secure than your neighbor (other targets)
    • The “bear race” does not apply when an attacker targets you specifically, as in spear phishing
  • An ideal security rating system would resemble UL’s TL ratings for safes (e.g., TL-15 means the safe resists 15 minutes of attack with common tools)
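
A minimal sketch of the rational-attacker idea above, in Python. All the dollar figures and probabilities are made-up illustrative numbers, not anything from lecture; the point is only that deterrence means pushing the expected cost above the expected benefit.

```python
# Sketch: deterring a rational attacker (illustrative numbers only).

def attack_is_worthwhile(payoff, success_probability,
                         attack_cost, catch_probability, penalty):
    """A rational attacker attacks only if expected benefit > expected cost."""
    expected_benefit = payoff * success_probability
    expected_cost = attack_cost + catch_probability * penalty
    return expected_benefit > expected_cost

# A $1,000 asset, 50% chance of success, $100 to mount the attack,
# 10% chance of getting caught with a $10,000 penalty:
print(attack_is_worthwhile(1000, 0.5, attack_cost=100,
                           catch_probability=0.1, penalty=10_000))  # False: deterred

# Defenders can work on any term of the inequality: raise the attack's cost,
# lower its success probability, or raise the odds and price of getting caught.
```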

Common Assumptions for Threat Modeling

  • Attackers can interact with our system without drawing attention
    • The internet is so noisy that probes and scans blend into background traffic
  • Attackers can find easy information
    • Passive recon and OSINT
  • Attackers can obtain access to a copy of the given system
    • Shannon’s Maxim: assume “the enemy knows the system”
  • Attackers have clever ways to automate
    • Automation breaks a detect-and-respond model if the responses happen at human speed (see the rate sketch after this list)
    • Attackers can get lucky
  • Attackers can collaborate
  • Attackers can bring large resources (computing resources, people, etc)
  • Assume the attacker already has access to anything that would help them
    • Example: assuming attacker already has user privilege, how might they privilege escalate?
  • Key infrastructure systems are well protected
  • Even a system with no apparent value may be used by the attacker for lateral movement
  • Attackers are not afraid of being caught
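
To put numbers on the automation point above, here is a back-of-the-envelope sketch; the rates are invented for illustration. Even a small automated attack generates more alerts than a human team can triage, which is why the response has to be automated as well.

```python
# Sketch: why human-scale response loses to automation (illustrative rates).

attempts_per_second = 1_000   # a modest botnet's request rate
alert_fraction = 0.001        # fraction of attempts that trip an alert
human_triage_seconds = 300    # ~5 minutes of analyst time per alert

alerts_per_second = attempts_per_second * alert_fraction
analyst_load = alerts_per_second * human_triage_seconds
print(f"{analyst_load:.0f} analyst-seconds of triage needed per wall-clock second")
# -> 300: you'd need ~300 analysts working nonstop just to keep pace.
```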

Intimate Partner Threat (IPT)

  • For the average Joe, an intimate partner is the most dangerous attacker
  • IPTs have physical access, intimate knowledge, and an easy route to social engineering
  • Karen Levy on IPT
  • A good problem that needs solving and has huge impact

Authentication

  • Something you know (Password, security questions)
  • Something you have (RSA token)
  • Something you are (fingerprint)
  • Multifactor Auth:
    • U2F - “When you first add the key to your account, your key generates a random number, which is called a nonce. It uses a secure hash function (remember those?!) to mix this with the domain of the website you are on (e.g. www.example.com) and a secret key, which never leaves the device, to generate a unique private key for your account.”
      • Decent at stopping phishing, because the site’s real domain is mixed into the derived key (see the sketch after this list)
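
Here is a rough sketch of the per-site key derivation the U2F quote describes, using HMAC-SHA256 as the “secure hash function” that mixes the pieces together. This is an illustration of the idea, not the actual FIDO U2F key-wrapping scheme; the function and variable names are mine.

```python
import hashlib
import hmac
import os

# Sketch of U2F-style per-site key derivation (illustrative, not the real
# FIDO U2F scheme). The device secret never leaves the token; each
# (secret, nonce, domain) triple yields a distinct per-account key.

device_secret = os.urandom(32)   # baked into the token, never exported

def derive_account_key(secret: bytes, nonce: bytes, domain: str) -> bytes:
    """Mix the device secret, a per-registration nonce, and the site's
    domain into a per-account private-key seed."""
    return hmac.new(secret, nonce + domain.encode(), hashlib.sha256).digest()

nonce = os.urandom(16)           # generated when the key is added to an account
key_real = derive_account_key(device_secret, nonce, "www.example.com")
key_phish = derive_account_key(device_secret, nonce, "www.examp1e.com")

# A lookalike phishing domain derives a *different* key, so signatures made
# for it will not verify at the real site -- that is the anti-phishing property.
assert key_real != key_phish
```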

Principles

  • “Security is economics”
    • More security costs more
    • Standards often define “security”
    • “You don’t put a $10 lock on a $1 rock, unless an attacker can leverage that $1 rock to attack something more important”
    • Cost/benefit analysis is required to assess threats and prioritize by impact
    • KISS - “Keep It Simple, Stupid” - simplicity means easier maintenance, fewer assumptions, and fewer mistakes
  • “Psychological acceptability”
    • Don’t blame the user
    • If a security system is unusable, it will go unused
    • Will users try to subvert the system?
      • How can we design a system that is easy to use for users, but also reasonably secure?
  • “Consider human factors”
    • Programmer mistakes are often tied to the tools and languages they use
  • “Detect and respond if you can’t protect”
    • If preventing an attack is not possible, then detection and response is the next best model to consider.
    • Detection fails in two ways:
      • False positive - “false alarm” - an alert when there is no threat
      • False negative - no alert when there is a real threat
    • The bind: false positives are costly to investigate, while a false negative means the defense simply failed (see the worked example after this list).
    • Example: Physical safes that are designed to fail-closed - if the safe detects tampering, it will make itself harder to open
    • Prevention is always better than detection and response, even when detection and response is easy
  • “Fail-safe defaults”
    • Unless an entity is given explicit access, it should be denied access by default (see the default-deny sketch after this list).
    • Whether to fail closed (deny on failure) or fail open (allow on failure) depends on the situation
  • “Defense in Depth”
    • Like a medieval castle - layers of defense
    • The idea is that the attacker must breach all defenses to gain something valuable
    • The tradeoff: each layer reduces false negatives, but for a detect-and-respond system, defense in depth multiplies the cost of false positives because every layer can raise its own alarms
  • “Mitigation and recovery”
    • Assumption: We will inevitably be breached
    • What is our plan for when we get breached?
  • “Least privilege”
    • Entities should only be given enough privileges to complete a task
    • Define Trusted Computing Base (TCB)
      • Requirements:
        • Verifiable - ability to check if it is correct
        • Complete mediation - cannot be bypassed
        • Secure - cannot be tampered with
      • Goal is KISS - “Keep It Simple, Stupid”
    • Users need to know whether or not they are on the “trusted path” - this is “mutual authentication”/“bidirectional authentication”
  • “Complete mediation”
  • “Privilege separation”
  • Separation of responsibilities/duties
  • Secure the weakest link
    • You are only as strong as your weakest link
  • “Don’t rely on security through obscurity”
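
The false positive/negative bind from “Detect and respond if you can’t protect” is worth a worked example: because real attacks are rare, even a detector with seemingly good error rates buries responders in false alarms. The event counts and rates below are invented for illustration.

```python
# Sketch: the base-rate problem for detect-and-respond (illustrative numbers).

events_per_day = 1_000_000   # events the detector inspects daily
attack_rate = 1e-5           # 10 real attacks per million events
false_positive_rate = 0.001  # flags 0.1% of benign events
false_negative_rate = 0.01   # misses 1% of real attacks

attacks = events_per_day * attack_rate
false_alarms = (events_per_day - attacks) * false_positive_rate
caught = attacks * (1 - false_negative_rate)

print(f"{false_alarms:.0f} false alarms vs {caught:.1f} real detections per day")
# -> ~1000 false alarms for ~10 real detections: almost every alert is noise,
#    which is exactly why false positives are the costly failure mode.
```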
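
And a tiny sketch of “fail-safe defaults” and “least privilege” working together: access is refused unless an explicit grant exists, and each grant names only the specific action needed. The ACL layout is hypothetical, just the simplest structure that shows the default-deny pattern.

```python
# Sketch: fail-safe defaults + least privilege (hypothetical ACL layout).
# Anything not explicitly granted as a (user, resource, action) triple is denied.

GRANTS = {
    ("alice", "payroll.db", "read"),   # alice may read, and only read
}

def allowed(user: str, resource: str, action: str) -> bool:
    """Default deny: access requires an explicit grant."""
    return (user, resource, action) in GRANTS

print(allowed("alice", "payroll.db", "read"))    # True
print(allowed("alice", "payroll.db", "write"))   # False: never granted
print(allowed("mallory", "payroll.db", "read"))  # False: unknown users get nothing
```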

Questions

  • What is a memory and type safe language?
    • Type safe - a language free from type errors: erroneous or undesirable program behavior caused by mismatched data types, e.g., treating an integer (int) as a floating-point number (float)
    • Memory safe - a language whose programs cannot make invalid memory accesses: no out-of-bounds reads or writes, no use of uninitialized or freed memory, and no memory leaks during execution (see the sketch after this list)
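
To make those definitions concrete: Python is memory safe and dynamically type-checked, so the operations below raise exceptions instead of silently corrupting or leaking memory, which is what the equivalent C could do.

```python
# Sketch: memory safety and type checking as runtime behavior in Python.

buffer = [0, 1, 2, 3]

try:
    buffer[10]       # out-of-bounds read: C might silently return adjacent memory
except IndexError as e:
    print("memory-safe:", e)

try:
    "3" + 4          # mixing str and int: a type error, caught at runtime
except TypeError as e:
    print("type-checked:", e)

# A statically type-safe setup (e.g., Python type annotations checked by mypy)
# would reject the second operation before the program ever runs.
```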

Sources

Official Lecture 1 Slides

Official Lecture 2 Slides

Principles for Building Secure Systems

Design Patterns for Building Secure Systems
