CIA Triad · Security Principles · Compliance · Privacy by Design · Data Classification
Domains 1 & 2 of the CSSLP CBK (~24% combined). These foundational domains establish the security mindset every secure software professional must apply throughout the entire software lifecycle.
| Domain | Weight | Key Topics |
|---|---|---|
| 1. Secure Software Concepts | 10% | CIA triad, security principles, risk, trust models |
| 2. Secure Software Requirements | 14% | Requirements gathering, privacy, compliance, abuse cases |
| 3. Secure Software Design | 14% | Threat modeling, STRIDE, security architecture |
| 4. Secure Software Implementation | 14% | Secure coding, common vulns, code review |
| 5. Secure Software Testing | 14% | SAST, DAST, IAST, fuzzing, pen testing |
| 6. Secure Software Lifecycle Management | 11% | SDLC integration, DevSecOps, metrics |
| 7. Secure Software Deployment & Ops | 11% | Deployment, configuration, patch management, IR |
| 8. Secure Software Supply Chain | 12% | Third-party risk, SBOM, CI/CD security |
Confidentiality: Only authorized parties can access data (encryption, access controls).
Integrity: Data is accurate and unmodified (hashing, digital signatures).
Availability: Systems accessible when needed (redundancy, DDoS protection).
Least Privilege: Minimum access needed.
Defense in Depth: Multiple overlapping layers.
Fail Secure: Deny on failure.
Economy of Mechanism: Keep it simple.
Separation of Duties: No single point of control.
Security requirements must be defined before design begins. Sources: business objectives, legal/regulatory mandates, use cases + abuse cases, stakeholder interviews, security standards (OWASP ASVS, ISO 27001), and threat modeling outputs.
7 Foundational Principles: Proactive not Reactive · Privacy as Default · Embedded into Design · Full Functionality · End-to-End Security · Visibility/Transparency · Respect for User Privacy. Privacy must be built in — not bolted on.
Determines security controls applied. Typical levels: Public → Internal → Confidential → Restricted/Top Secret. Classification drives: encryption requirements, access control stringency, retention policies, and disposal methods.
The inverse of use cases. Describe how a malicious actor could misuse the system. Used to derive security requirements from attacker perspective. More effective than simple use cases for identifying security gaps early in the SDLC.
The CSSLP loves asking which principle applies to a given scenario. "Economy of mechanism" = keep security designs simple. "Complete mediation" = check every access, every time. "Psychological acceptability" = security mechanisms shouldn't make the system harder to use than without them.
Security requirements must be gathered at the start of the SDLC. Adding security after design is bolt-on security: expensive and less effective. The exam will try to trick you into thinking security can be retrofitted effectively; it cannot.
CIA = data security properties (Confidentiality, Integrity, Availability). AAA = access control framework (Authentication, Authorization, Accounting/Auditing). Both are foundational — know which applies to which scenario.
Domain 1 (~10%) establishes the foundational security vocabulary and principles that underpin every other CSSLP domain.
| Property | Definition | Threats | Controls |
|---|---|---|---|
| Confidentiality | Preventing unauthorized disclosure of information | Eavesdropping, data breach, shoulder surfing | Encryption, access controls, data masking |
| Integrity | Ensuring data accuracy and preventing unauthorized modification | Tampering, man-in-the-middle, SQL injection | Hashing (SHA-256), digital signatures, input validation |
| Availability | Ensuring systems and data are accessible when needed | DDoS, ransomware, hardware failure | Redundancy, load balancing, backups, DDoS mitigation |
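The integrity row above can be made concrete with a short sketch: a SHA-256 fingerprint detects any modification of the data it covers (the transaction strings are illustrative).

```python
import hashlib

def sha256_hex(data: bytes) -> str:
    """Fixed-length fingerprint; any change to the input changes the digest."""
    return hashlib.sha256(data).hexdigest()

record = b"transfer $100 to account 42"
fingerprint = sha256_hex(record)

# A one-character modification produces a completely different digest,
# so a stored (or digitally signed) fingerprint reveals tampering.
tampered = b"transfer $900 to account 42"
assert sha256_hex(tampered) != fingerprint
```

Note that a bare hash only detects accidental corruption; to stop an attacker who can replace both data and hash, the fingerprint must itself be protected (e.g., with a digital signature or HMAC).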
Proving identity. Factors: Something you know (password), Something you have (token/OTP), Something you are (biometric), Somewhere you are (location). MFA combines two or more.
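The "something you have" factor is often a time-based one-time password (TOTP). A minimal RFC 6238 implementation, using only the standard library, looks like this:

```python
import base64
import hashlib
import hmac
import struct

def totp(secret_b32: str, unix_time: int, step: int = 30, digits: int = 6) -> str:
    """RFC 6238 time-based one-time password (HMAC-SHA1 variant)."""
    key = base64.b32decode(secret_b32)
    counter = struct.pack(">Q", unix_time // step)   # 30-second time window
    digest = hmac.new(key, counter, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                       # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)
```

Combined with a password (something you know), verifying this code gives two-factor authentication; the code alone is not MFA.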
Determining what an authenticated identity is permitted to do. Models: RBAC (Role-Based), ABAC (Attribute-Based), DAC (Discretionary), MAC (Mandatory). Principle: deny by default, grant explicitly.
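The "deny by default, grant explicitly" principle can be sketched as a minimal RBAC check (role and permission names here are hypothetical):

```python
# Hypothetical role-to-permission mapping; permissions are granted explicitly
ROLE_PERMISSIONS = {
    "analyst": {"report:read"},
    "admin": {"report:read", "report:write", "user:manage"},
}

def is_authorized(role: str, permission: str) -> bool:
    # Deny by default: unknown roles and unlisted permissions are refused
    return permission in ROLE_PERMISSIONS.get(role, set())
```

The key design choice is that the default path returns a denial; authorization bugs in deny-by-default systems fail closed rather than open.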
Recording what authenticated and authorized users actually did. Enables forensics, compliance, anomaly detection, and non-repudiation. Logs must be tamper-evident (append-only, integrity protected).
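One common way to make a log tamper-evident is a hash chain: each entry includes the hash of the previous one, so editing or deleting any entry breaks every later link. A minimal sketch:

```python
import hashlib
import json

GENESIS = "0" * 64  # placeholder hash for the first entry

def append_entry(log: list, event: dict) -> None:
    """Append an event, chaining it to the hash of the previous entry."""
    prev = log[-1]["hash"] if log else GENESIS
    body = json.dumps(event, sort_keys=True)
    digest = hashlib.sha256((prev + body).encode()).hexdigest()
    log.append({"event": event, "prev": prev, "hash": digest})

def verify_chain(log: list) -> bool:
    """Recompute every link; any edited or removed entry breaks the chain."""
    prev = GENESIS
    for entry in log:
        body = json.dumps(entry["event"], sort_keys=True)
        if entry["prev"] != prev:
            return False
        if entry["hash"] != hashlib.sha256((prev + body).encode()).hexdigest():
            return False
        prev = entry["hash"]
    return True
```

In production this chain would be anchored externally (e.g., the latest hash shipped to a separate system), since an attacker who controls the whole file could rebuild the chain.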
| Principle | Definition | Example Application |
|---|---|---|
| Least Privilege | Grant minimum access rights needed for a function | Database user has SELECT only, not DROP TABLE |
| Separation of Duties | No single person/process controls a critical function end-to-end | Developer cannot deploy to production |
| Defense in Depth | Multiple layered security controls; no single point of failure | WAF + input validation + parameterized queries |
| Fail Secure | On failure, system defaults to a secure (denying) state | Firewall drops all traffic if rules engine crashes |
| Economy of Mechanism | Keep security designs as simple as possible | Prefer simple crypto libraries over custom implementations |
| Complete Mediation | Check every access request every time; no caching of decisions | Re-validate session on every sensitive operation |
| Open Design | Security should not depend on secrecy of design (Kerckhoffs) | Security of AES relies on key secrecy, not algorithm secrecy |
| Psychological Acceptability | Security mechanisms must not make system harder to use | SSO reduces password fatigue while maintaining security |
| Least Common Mechanism | Minimize shared mechanisms between users | Separate database connections per tenant |
"Never trust, always verify." No implicit trust based on network location. Every request authenticated and authorized regardless of source. Key tenets: verify identity explicitly, use least privilege access, assume breach.
Once inside the network perimeter, entities are implicitly trusted. This is the "castle-and-moat" security model, now considered inadequate in modern environments with remote work, cloud services, and insider threats.
Risk = Likelihood × Impact. Risk responses: Accept, Avoid, Transfer (insurance), Mitigate (controls). Residual risk = risk remaining after controls. Risk appetite = amount of risk an organization is willing to accept.
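The formulas above can be sketched directly; the 1–5 rating scale and the effectiveness figure are illustrative assumptions, not part of any standard:

```python
def risk_score(likelihood: int, impact: int) -> int:
    """Qualitative risk: both factors rated 1 (low) to 5 (high), score 1-25."""
    return likelihood * impact

def residual_risk(inherent: float, control_effectiveness: float) -> float:
    """Risk remaining after a control reduces the inherent risk."""
    return inherent * (1.0 - control_effectiveness)

# A high-likelihood (4), high-impact (5) risk scores 20; a control that is
# 75% effective leaves a residual risk of 5.0, which the organization then
# accepts, avoids, transfers, or mitigates further.
inherent = risk_score(4, 5)
remaining = residual_risk(inherent, 0.75)
```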
Domain 2 (~14%). Security requirements define what the software must do (and must not allow) from a security perspective. They must be defined before design begins.
Specific security behaviors the system must implement. Examples: "The system shall enforce MFA for all admin accounts," "The system shall encrypt all PII at rest using AES-256," "Passwords shall be hashed using bcrypt with a cost factor ≥ 12."
Security quality attributes not tied to specific functions. Examples: "The authentication system shall respond within 500ms," "The system shall achieve 99.99% availability," "All API endpoints shall require TLS 1.2 or higher."
Requirements derived from threat modeling, compliance mandates, or system architecture decisions. Not explicitly stated by stakeholders but necessary to meet stated goals. Often discovered during threat modeling (STRIDE analysis).
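The password-hashing requirement above can be sketched as code. This uses the standard library's scrypt as a stand-in for bcrypt (which needs a third-party package); a bcrypt cost factor of 12 is comparable in spirit to scrypt with n=2**14, but the mapping is an assumption, not an equivalence.

```python
import hashlib
import hmac
import os

def hash_password(password: str) -> tuple[bytes, bytes]:
    """Salted, deliberately slow password hash (scrypt standing in for bcrypt)."""
    salt = os.urandom(16)  # unique per-password salt
    digest = hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1)
    return salt, digest

def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
    candidate = hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1)
    return hmac.compare_digest(candidate, digest)  # constant-time comparison
```

The salt defeats precomputed rainbow tables, the work factor slows brute force, and the constant-time comparison avoids timing side channels.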
| Source | Examples | Techniques |
|---|---|---|
| Stakeholders | Business owners, legal, compliance, end users | Interviews, workshops, surveys |
| Regulatory/Legal | GDPR, HIPAA, PCI-DSS, SOX, CCPA | Compliance gap analysis |
| Standards | OWASP ASVS, NIST SP 800-53, ISO 27001 | Requirements mapping |
| Threat Modeling | STRIDE analysis outputs | Attack tree analysis |
| Abuse Cases | Attacker scenarios, misuse cases | Negative use case analysis |
| Prior Incidents | Vulnerability history, breach post-mortems | Lessons-learned review |
"A registered user can log in with their email and password."
Describes intended, legitimate interactions with the system. Forms the basis of functional requirements. Security teams extend these into abuse cases.
"An attacker attempts to log in by brute-forcing passwords."
Describes how a malicious actor misuses the same functionality. Directly generates security requirements: rate limiting, account lockout, CAPTCHA, MFA.
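The brute-force abuse case above translates directly into a lockout requirement. A minimal sliding-window sketch (the five-attempt threshold and 15-minute window are a hypothetical policy):

```python
from collections import defaultdict

MAX_ATTEMPTS = 5        # lockout threshold (hypothetical policy)
WINDOW_SECONDS = 900    # 15-minute sliding window

_failures = defaultdict(list)  # account -> timestamps of recent failed logins

def record_failure(account: str, now: float) -> None:
    _failures[account].append(now)

def is_locked_out(account: str, now: float) -> bool:
    # Keep only failures inside the window, then compare against the threshold
    recent = [t for t in _failures[account] if now - t < WINDOW_SECONDS]
    _failures[account] = recent
    return len(recent) >= MAX_ATTEMPTS
```

Rate limiting by source IP, CAPTCHA, and MFA would layer on top of this (defense in depth), since per-account lockout alone enables a denial-of-service abuse case of its own.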
Broader than an abuse case — includes accidental misuse by legitimate users, not just malicious intent. Example: "A user accidentally exports an entire customer database instead of their own records." Drives access scoping requirements.
| Level | Description | Examples | Controls Required |
|---|---|---|---|
| Public | Intentionally shared with anyone | Marketing materials, press releases | Integrity controls only |
| Internal | For internal use only | Employee handbook, meeting notes | Access controls, limited sharing |
| Confidential | Sensitive business data | Financial data, client lists, IP | Encryption at rest/transit, strict access |
| Restricted | Highest sensitivity; regulatory or legal protection | PII, PHI, PCI data, trade secrets | Strong encryption, audit logging, MFA, DLP |
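The idea that "classification drives controls" can be expressed as a lookup; the control names below are a hypothetical baseline derived from the table above, not a standard:

```python
# Hypothetical mapping from classification level to baseline controls
CONTROLS = {
    "public":       {"integrity_checks"},
    "internal":     {"integrity_checks", "access_control"},
    "confidential": {"integrity_checks", "access_control",
                     "encryption_at_rest", "encryption_in_transit"},
    "restricted":   {"integrity_checks", "access_control",
                     "encryption_at_rest", "encryption_in_transit",
                     "audit_logging", "mfa", "dlp"},
}

def required_controls(level: str) -> set:
    # Fail secure: an unrecognized label gets the strictest control set
    return CONTROLS.get(level.lower(), CONTROLS["restricted"])
```

Defaulting unknown labels to the strictest tier is a small application of the fail-secure principle from Domain 1.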
OWASP ASVS Level 1: Basic security, verifiable by black-box testing. The minimum bar for all software; covers the most common OWASP Top 10 vulnerabilities. Appropriate for: low-risk applications, first pass on all software.
OWASP ASVS Level 2: The standard for most applications. Requires security controls that defend against the majority of today's risks. Appropriate for: applications handling sensitive data, business-critical systems.
OWASP ASVS Level 3: Highest assurance level. Requires full documentation and source-code access. Appropriate for: critical infrastructure, military, medical devices, and financial systems where failure could cause significant harm.
Regulatory compliance and privacy requirements are major drivers of software security requirements. Understanding key frameworks helps you map legal obligations to technical controls.
| Framework | Scope | Key Requirements | Penalty for Non-Compliance |
|---|---|---|---|
| GDPR | EU citizens' personal data, globally | Consent, right to erasure, data minimization, breach notification (72 hrs), DPO, Privacy by Design | Up to €20M or 4% of global annual revenue, whichever is higher |
| HIPAA | US healthcare — PHI (Protected Health Information) | Administrative, physical, technical safeguards; breach notification; Business Associate Agreements | $100–$50,000 per violation |
| PCI-DSS | Payment card data (anyone storing/processing/transmitting) | 12 requirements: network security, cardholder data protection, vulnerability management, access control, monitoring | Fines, loss of card processing rights |
| SOX | US publicly traded companies — financial reporting | Sections 302 (CEO/CFO attestation), 404 (internal controls), audit trails, access controls for financial systems | Criminal penalties, fines, delisting |
| CCPA | California consumers' personal data | Right to know, opt-out of sale, non-discrimination, deletion rights | $2,500 per violation; $7,500 per intentional violation |
| FISMA | US federal agencies and contractors | Risk management framework (NIST RMF), continuous monitoring, ATO (Authority to Operate) | Loss of federal contracts |
Anticipate and prevent privacy risks before they occur. Privacy embedded from the start — not discovered after the fact during audits or incidents.
Maximum privacy protection is the default — users shouldn't have to take action to protect their privacy. Data minimization by default.
Privacy is a core feature, not an add-on. It is integrated into the system architecture and business practices seamlessly.
Privacy and security achieve all legitimate objectives — no unnecessary trade-offs. Avoid false "privacy vs. usability" dichotomies.
Strong security measures throughout the entire lifecycle — from collection to retention to deletion. Secure destruction when data is no longer needed.
Components and operations remain visible and transparent — open to independent verification. No hidden agendas or secret data practices.
Keep the system user-centric. Provide strong privacy defaults, appropriate notice, and genuine user empowerment to control their own data.
| Concept | Definition | Implementation |
|---|---|---|
| Data Minimization | Collect only data necessary for the stated purpose | Remove optional fields; regularly audit what data is stored |
| Purpose Limitation | Use data only for the purpose it was collected | Separate data stores per purpose; access controls by purpose |
| Anonymization | Remove all identifying information irreversibly | Generalization, suppression, noise addition |
| Pseudonymization | Replace identifying info with pseudonyms (reversible with key) | Tokenization, key-based substitution; still considered personal data |
| Consent Management | Users explicitly agree to data collection and processing | Granular opt-in checkboxes, consent database, withdrawal mechanism |
| Right to Erasure | "Right to be forgotten" — users can request deletion | Data deletion workflows, cascade deletes, backup purging processes |
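Pseudonymization via key-based substitution can be sketched with a keyed HMAC: the same input always maps to the same token (preserving joins across datasets), but without the key the mapping cannot be recomputed.

```python
import hashlib
import hmac

def pseudonymize(value: str, key: bytes) -> str:
    """Keyed, deterministic pseudonym. Reversal requires a lookup table held
    by the key owner; under GDPR the output is still personal data."""
    return hmac.new(key, value.encode(), hashlib.sha256).hexdigest()[:16]
```

Contrast with anonymization: because the key owner can re-link tokens to individuals, this remains pseudonymization, and GDPR obligations still apply.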