CSSLP Exam Prep · Topic 4 of 5

Secure Software Testing & Lifecycle Management

SAST · DAST · IAST · RASP · Fuzzing · Pen Testing · DevSecOps · Shift Left · Metrics


Overview — Domains 5 & 6

Domains 5 (Secure Software Testing) and 6 (Secure Software Lifecycle Management) together account for approximately 25% of the CSSLP exam. This topic covers how and when to test, how to integrate security into every phase, and how to measure progress.

8-Domain Exam Blueprint
# | Domain                                                | Approx. Weight
1 | Secure Software Concepts                              | ~10%
2 | Secure Software Requirements                          | ~14%
3 | Secure Software Architecture & Design                 | ~14%
4 | Secure Software Implementation                        | ~14%
5 | Secure Software Testing                               | ~14%
6 | Secure Software Lifecycle Management                  | ~11%
7 | Secure Software Supply Chain                          | ~13%
8 | Secure Software Deployment, Operations & Maintenance  | ~10%

Core Concepts

SAST — Static Analysis

Source code or bytecode analysis without executing the application. Runs in the CI pipeline during the coding phase. Finds injection flaws, hardcoded secrets, and insecure API usage early and cheaply.

DAST — Dynamic Analysis

Tests a running application from outside (black box). Used in staging/testing phase. Finds XSS, SQLi, auth issues, and misconfigurations that only manifest at runtime.

IAST — Interactive Analysis

Instrumented agents sit inside the running application during test execution. Combines SAST and DAST advantages. Produces very few false positives and pinpoints the exact code location of each finding.

RASP — Runtime Protection

NOT a test tool — production runtime protection. Embedded agents intercept DB queries, file ops, and network calls. Detects and blocks attacks in real time, can terminate sessions or raise alerts.

Fuzzing

Automated submission of malformed or random input to find crashes and memory corruption. Coverage-guided fuzzing (AFL, libFuzzer) tracks code paths and is the most efficient variant. Finds buffer overflows, parser bugs, DoS.

DevSecOps / Shift Left

Integrating security into every SDLC phase rather than bolting it on at the end. Security gates in CI/CD pipelines. Cost to fix at requirements is 1× vs 100× in production.

Exam Tips — Critical Distinctions

SAST = Static — no execution needed, runs during coding phase, language-specific.
DAST = Dynamic — requires running app, used in testing/staging phase, language-agnostic.
IAST = Inside — lowest false positives, instrumented agents during QA testing with real traffic.
RASP = Runtime production defense — NOT a testing tool. Blocks attacks; IAST only finds them.
Pen test timing — before major releases, annually at minimum, and after major architecture changes. PCI-DSS mandates annual penetration testing.

Security Testing Types

Understanding each technique's timing, strengths, and limitations is essential for the CSSLP exam. Questions often test which method to apply in a given scenario.

SAST — Static Application Security Testing

How it works

Analyzes source code, bytecode, or binary without executing the application. Builds ASTs (abstract syntax trees) and data flow graphs to detect vulnerability patterns.
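
As a concrete illustration, here is a minimal, hypothetical Python snippet showing two pattern classes a SAST rule set typically flags — a hardcoded credential and tainted input flowing into a SQL sink — next to remediated forms that pass the same rules:

```python
import os
import sqlite3

# What a SAST rule flags: a hardcoded credential (CWE-798 pattern)
API_KEY = "sk_live_abc123"  # finding: hardcoded secret

def find_user_unsafe(conn: sqlite3.Connection, username: str):
    # finding: user input flows into a SQL string (taint source -> sink, CWE-89)
    return conn.execute(f"SELECT * FROM users WHERE name = '{username}'")

# Remediated versions that pass the same rules:
API_KEY_SAFE = os.environ.get("API_KEY")  # secret comes from the environment

def find_user_safe(conn: sqlite3.Connection, username: str):
    # parameterized query: the driver keeps code and data separate
    return conn.execute("SELECT * FROM users WHERE name = ?", (username,))
```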

When to Use

  • During coding phase
  • Integrated into CI/CD pipeline (every commit)
  • Before code review as a pre-screen

What it Finds

  • SQL injection, XSS patterns in code
  • Hardcoded secrets and credentials
  • Insecure API usage
  • Buffer overflows (C/C++)

Tools

  • Checkmarx, SonarQube
  • Semgrep (multi-language)
  • Bandit (Python-specific)
  • Coverity, Fortify

Limitations vs Strengths

  • ❌ High false positives — needs tuning
  • ❌ Language-specific rules required
  • ❌ Can't find runtime/config issues
  • ✅ Early feedback — shift left
  • ✅ No running environment needed

DAST — Dynamic Application Security Testing

How it works

Tests the running application from the outside as a black box — like an external attacker. Sends crafted HTTP requests and analyzes responses. No source code access required.
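
A minimal sketch of the idea, assuming a hypothetical staging URL and Python's requests library — inject a unique marker into each parameter and check whether it is reflected back unencoded. Real scanners such as ZAP do far more, including context-aware encoding checks:

```python
import requests

TARGET = "https://staging.example.com/search"  # hypothetical staging URL
MARKER = '<script>alert("dast-probe-7f3a")</script>'

def probe_reflected_xss(param: str) -> bool:
    resp = requests.get(TARGET, params={param: MARKER}, timeout=10)
    # Only checks for a verbatim, unencoded reflection of the marker.
    return MARKER in resp.text

for param in ("q", "category", "page"):
    if probe_reflected_xss(param):
        print(f"possible reflected XSS via parameter '{param}'")
```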

When to Use

  • Testing and staging phases
  • Before production release
  • Scheduled regression scans

What it Finds

  • XSS (reflected, stored)
  • SQL injection (runtime confirmation)
  • Authentication and session issues
  • Security misconfigurations

Tools

  • OWASP ZAP (free)
  • Burp Suite (industry standard)
  • Acunetix, Nessus Web Scanner

Limitations vs Strengths

  • ❌ No source code access — less precise
  • ❌ Slower than SAST
  • ❌ Misses complex business logic
  • ✅ Low false positives (real requests)
  • ✅ Language-agnostic — tests real system

IAST — Interactive Application Security Testing

How it works

Agents instrument the running application (bytecode instrumentation) and observe execution during QA test runs. This combines inside knowledge (like SAST) with real-traffic testing (like DAST).
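
A toy sketch of the observe-and-report idea — real IAST agents hook the runtime via bytecode instrumentation; this simply wraps a DB call in Python and records the exact call site instead of blocking:

```python
import sqlite3
import traceback

findings: list[dict] = []

def iast_execute(conn: sqlite3.Connection, sql: str, params=()):
    # Observation point: record a finding, then let execution continue.
    if not params and "'" in sql:  # toy heuristic: literal built into the SQL
        caller = traceback.extract_stack()[-2]  # file + line of the call site
        findings.append({"sql": sql, "file": caller.filename, "line": caller.lineno})
    return conn.execute(sql, params)

# During a QA test run the agent observes real traffic:
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")
iast_execute(conn, "SELECT * FROM users WHERE name = 'alice' OR 1=1")
print(findings)  # report with precise code location, produced after the run
```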

When to Use

  • QA testing phase with real traffic
  • During automated integration testing
  • Alongside DAST for maximum coverage

Strengths

  • ✅ Very low false positive rate
  • ✅ Pinpoints exact code location
  • ✅ Works inside the app — best coverage

Tools

  • Contrast Security
  • Seeker by Synopsys
  • HDIV Detection

Key Distinction

IAST is a testing tool used in test environments; it produces reports and findings. Do not confuse it with RASP, which uses similar agent technology but runs in production to block attacks.

RASP — Runtime Application Self-Protection

Critical exam point: RASP is NOT a testing tool. It is a production defense mechanism. The question "which tool blocks attacks in production?" → RASP.

How it Works

Agents embedded in the running production application intercept calls — DB queries, file operations, network calls — and inspect them in real time against attack signatures.
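
A toy sketch of the intercept-and-block idea, using the same wrapping approach as the IAST sketch earlier — commercial RASP agents hook the runtime itself, and the injection signature here is deliberately crude:

```python
import re
import sqlite3

# Crude tautology signature for classic SQL injection payloads.
SQLI_SIGNATURE = re.compile(r"\b(or|and)\b\s+'?1'?\s*=\s*'?1", re.IGNORECASE)

class RaspBlocked(Exception):
    """Raised when the agent blocks a malicious request in real time."""

def rasp_execute(conn: sqlite3.Connection, sql: str, params=()):
    # Interception point: inspect the statement before it reaches the DB.
    if SQLI_SIGNATURE.search(sql):
        # A real agent would also raise an alert to the SIEM/SOC here.
        raise RaspBlocked(f"blocked suspicious statement: {sql!r}")
    return conn.execute(sql, params)

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")
try:
    rasp_execute(conn, "SELECT * FROM users WHERE name = 'x' OR 1=1")
except RaspBlocked as exc:
    print(exc)  # attack stopped before reaching the database
```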

Capabilities

  • Detect and block SQLi, command injection
  • Terminate malicious sessions
  • Alert SIEM/SOC in real time
  • Protect without patching the app

IAST vs RASP

  • IAST → test environment → finds vulns → report
  • RASP → production → blocks attacks → alerts
  • Same agent tech, opposite purpose and phase

Fuzzing

Fuzzing Types

  • Dumb Fuzzing: purely random bytes — simplest, least efficient
  • Smart / Generation-based: builds inputs from a model or grammar of the expected format
  • Mutation-based: mutate valid seed inputs
  • Coverage-guided (AFL, libFuzzer): tracks code paths, mutates toward uncovered branches — most efficient

What it Finds

  • Buffer overflows
  • Integer overflows
  • Format string vulnerabilities
  • Parser crashes and DoS
  • Memory corruption issues

Tools

  • AFL++ (coverage-guided)
  • libFuzzer (LLVM-based)
  • Boofuzz (protocol fuzzing)
  • Google OSS-Fuzz (continuous cloud fuzzing)
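
For a feel of the harness style, here is a minimal coverage-guided harness, assuming Atheris (Google's libFuzzer-based fuzzer for Python, `pip install atheris`). It follows the same TestOneInput convention as a libFuzzer target, with Python's json parser standing in for the code under test:

```python
import sys
import atheris

with atheris.instrument_imports():
    import json  # the target: instrumented so coverage guides mutation

def TestOneInput(data: bytes) -> None:
    try:
        json.loads(data)
    except ValueError:
        pass  # well-formed rejections are expected; crashes and hangs are findings

atheris.Setup(sys.argv, TestOneInput)
atheris.Fuzz()
```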

Penetration Testing

Types

  • Black Box: no knowledge — external attacker simulation, most realistic
  • White Box: full source code + architecture — most thorough
  • Grey Box: partial knowledge — common for web app tests
  • Red Team: adversary simulation including physical + social + technical

Phases

  1. Reconnaissance (passive + active)
  2. Scanning (ports, services, vuln scan)
  3. Exploitation (exploit findings)
  4. Post-Exploitation (lateral movement, priv esc)
  5. Reporting (CVSS ratings, remediation steps)

When Required

  • Before major releases
  • Annually (PCI-DSS mandate)
  • After major architecture changes
  • Post-incident (to find remaining exposure)

Pen Test vs Vuln Scan

Vulnerability scanning identifies potential weaknesses. Penetration testing actively exploits them to confirm impact. A pen test requires a Rules of Engagement (RoE) document, signed before testing starts.

Comparison Table

Method   | Access                     | When in SDLC             | False Positives | Key Strength
SAST     | Source code / bytecode     | Coding / CI pipeline     | High            | Earliest feedback, shift left
DAST     | Running app (black box)    | Testing / Staging        | Low             | Tests real system, language-agnostic
IAST     | Inside running app (agent) | QA Testing               | Very Low        | Precise code location + real traffic
RASP     | Inside running app (agent) | Production (not testing) | N/A             | Blocks attacks in real time
Fuzzing  | App interface / API        | Testing / CI             | Low             | Finds memory corruption, crashes
Pen Test | Black / Grey / White box   | Pre-release / Annual     | Very Low        | Confirms real exploitability

DevSecOps & SDLC Integration

Shift Left and DevSecOps are not buzzwords — they represent a fundamental cost and quality argument. The earlier a vulnerability is found, the cheaper and faster it is to fix.

Shift Left — The Cost Argument

Relative Cost to Fix by Phase

  • Requirements: 1×
  • Design: ~5×
  • Implementation: 10×
  • Testing: 20×
  • Production: 100×

Shift Left Activities

  • Threat modeling in design phase
  • SAST integrated into CI pipeline
  • Security code review (4-eyes principle)
  • Developer security training
  • Security acceptance criteria in Definition of Done

CI/CD Security Gates — Phase by Phase

Gate        | Tools / Activities                                                                     | Action on Failure
Pre-commit  | Git hooks, secret scanning (truffleHog, git-secrets)                                   | Block commit — never let secrets enter repo
Build       | SAST scan (Checkmarx, SonarQube), SCA/dependency scan (Snyk, OWASP Dependency-Check)  | Fail build on critical findings
Test        | DAST against staging (ZAP, Burp), IAST during QA, security regression tests           | Block promotion to pre-release
Pre-release | Pen test sign-off, security review, compliance check                                   | Halt release — requires sign-off
Deploy      | RASP agents, WAF rules, infrastructure scanning                                        | Alert + block malicious traffic
Operate     | Continuous vuln scanning, threat intel feeds, patch management                         | SLA-driven remediation tickets
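
A minimal sketch of the pre-commit gate, as a Python hook that scans the staged diff for credential-shaped strings. The patterns are illustrative only; real scanners such as truffleHog add entropy analysis and hundreds of provider-specific rules:

```python
#!/usr/bin/env python3
"""Block the commit if a staged line matches a known credential pattern."""
import re
import subprocess
import sys

SECRET_PATTERNS = [
    re.compile(r"AKIA[0-9A-Z]{16}"),                        # AWS access key ID
    re.compile(r"-----BEGIN (RSA |EC )?PRIVATE KEY-----"),  # private key material
    re.compile(r"(?i)(api[_-]?key|password)\s*[:=]\s*['\"][^'\"]{8,}"),
]

# Staged changes only: exactly what would enter the repository.
diff = subprocess.run(
    ["git", "diff", "--cached", "--unified=0"],
    capture_output=True, text=True, check=True,
).stdout

hits = [
    line for line in diff.splitlines()
    if line.startswith("+") and any(p.search(line) for p in SECRET_PATTERNS)
]
if hits:
    print("Commit blocked: possible secrets in staged changes:")
    for line in hits:
        print("  " + line[:80])
    sys.exit(1)  # non-zero exit makes git abort the commit
```

Installed as .git/hooks/pre-commit (and marked executable), the hook runs on every commit; git aborts on a non-zero exit.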

Security in Agile Sprints

Backlog & Planning

  • Security user stories in product backlog
  • Abuse stories (negative functional requirements)
  • Security acceptance criteria in every story

Definition of Done (DoD)

  • SAST scan run — no critical findings
  • Security code review completed (4-eyes)
  • No hardcoded secrets detected
  • Security regression tests pass
  • Security acceptance criteria verified

Security Champion Model

  • Embedded security advocate in each team
  • First point of contact for security questions
  • Does NOT replace the security team
  • Receives additional security training
  • Reviews security aspects of PRs

Ongoing Activities

  • Update threat model on architecture changes
  • Security retrospective each sprint
  • Track security debt in backlog
  • Automated SAST/DAST in CI pipeline

SDLC Security Activities by Phase

Phase          | Security Activity                                                                  | Key Artifact
Requirements   | Define security requirements, abuse cases, privacy requirements                   | Security requirements document, abuse case list
Design         | Threat modeling (STRIDE), security architecture review, attack surface analysis   | Threat model (DFD + STRIDE), security design doc
Implementation | Secure coding standards, SAST in CI, security code review                         | SAST reports, peer review records
Testing        | DAST, IAST, fuzzing, security test cases, security regression testing             | DAST/IAST reports, pen test plan
Deployment     | Pen test sign-off, RASP config, WAF rules, hardening checklist                    | Pen test report, go/no-go security sign-off
Maintenance    | Patch management, vuln scanning, incident response, SCA updates                   | Patch compliance report, MTTD/MTTR metrics

Third-Party & Software Composition Analysis (SCA)

Why SCA Matters

SCA scans open-source and third-party dependencies for known CVEs using databases such as NVD, OSV, and the GitHub Advisory Database. You are responsible for ALL code in your product binary — including every open-source library you pull in.

SCA Tools

  • Snyk (developer-friendly)
  • OWASP Dependency-Check (free)
  • Black Duck (enterprise)
  • Dependabot (GitHub-native)

Log4Shell Lesson (CVE-2021-44228)

Critical RCE in Apache Log4j used by millions of Java applications. Exploitable with a single crafted string in a log message. Demonstrated that transitive dependencies (libraries used by your libraries) are equally dangerous. SCA finds these automatically.
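
A minimal sketch of an SCA-style lookup, assuming the public OSV query endpoint (api.osv.dev) — checking the exact Log4j version hit by Log4Shell:

```python
import requests

# Ask the OSV database for known vulns in one dependency version.
resp = requests.post(
    "https://api.osv.dev/v1/query",
    json={
        "version": "2.14.1",
        "package": {
            "ecosystem": "Maven",
            "name": "org.apache.logging.log4j:log4j-core",
        },
    },
    timeout=10,
)
resp.raise_for_status()
for vuln in resp.json().get("vulns", []):
    # CVE-2021-44228 appears among the IDs/aliases for this version.
    print(vuln["id"], vuln.get("aliases", []))
```

A full SCA tool repeats this for every direct and transitive dependency in the lockfile or SBOM.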

Metrics & Policy

Security metrics prove the ROI of your program and guide prioritization. Policies provide the authority and SLAs that drive action. Know both for the CSSLP exam.

Core Security Metrics

MTTD — Mean Time to Detect

Average time from vulnerability introduction to discovery. Lower = better monitoring. Measures how quickly your detection tooling finds issues. Report trend over time to management.

MTTR — Mean Time to Remediate

Average time from vulnerability discovery to fix deployed. Lower = faster fixes. Affected by team capacity, process efficiency, and complexity. Pair with MTTD for complete picture.

Vulnerability Density

Number of vulnerabilities per thousand lines of code (KLOC). Lower = better code quality. Track per release and per team to identify where investment in training/tooling is needed.
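
A worked toy example computing all three metrics from hypothetical vulnerability records (dates and KLOC are made up for illustration):

```python
from datetime import datetime
from statistics import mean

# Hypothetical records: when the flaw entered the code, when tooling
# found it, and when the fix shipped.
vulns = [
    {"introduced": datetime(2024, 1, 10), "detected": datetime(2024, 2, 1),
     "fixed": datetime(2024, 2, 4)},
    {"introduced": datetime(2024, 3, 5), "detected": datetime(2024, 3, 8),
     "fixed": datetime(2024, 3, 20)},
]

mttd_days = mean((v["detected"] - v["introduced"]).days for v in vulns)
mttr_days = mean((v["fixed"] - v["detected"]).days for v in vulns)

kloc = 120  # thousand lines of code in the release
density = len(vulns) / kloc  # vulnerabilities per KLOC

print(f"MTTD: {mttd_days:.1f} days, MTTR: {mttr_days:.1f} days, "
      f"density: {density:.3f} vulns/KLOC")
```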

Patch Compliance Rate

Percentage of vulnerabilities patched within their SLA window. Target: 100% for Critical and High. Gate production deployments on compliance — SLAs without enforcement are suggestions.

False Positive Rate

Percentage of SAST/DAST findings that are not real vulnerabilities. Lower = better-tuned tools. High false positive rates cause alert fatigue and erode developer trust in scanning tools.

Security Test Coverage

Percentage of code covered by security tests (SAST, unit tests for security functions). Higher = better. Pair with vuln density — high coverage + low density = mature program.

Vulnerability SLAs by CVSS Score

Severity | CVSS Range | Remediation SLA | Key Action
Critical | 9.0 – 10.0 | 24–48 hours     | Emergency patch, escalate immediately
High     | 7.0 – 8.9  | 7 days          | Priority fix in current or next sprint
Medium   | 4.0 – 6.9  | 30 days         | Scheduled fix with compensating controls
Low      | 0.1 – 3.9  | 90 days         | Backlog item, fix in regular maintenance
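
As a sketch, the table maps directly to a lookup function (48 hours taken as the Critical outer bound); a finding's due date is then simply its discovery time plus the SLA window:

```python
from datetime import timedelta

def remediation_sla(cvss: float) -> timedelta:
    """Map a CVSS base score to its remediation SLA window (mirrors the table above)."""
    if cvss >= 9.0:
        return timedelta(hours=48)
    if cvss >= 7.0:
        return timedelta(days=7)
    if cvss >= 4.0:
        return timedelta(days=30)
    return timedelta(days=90)

# due_date = detected_at + remediation_sla(score)
print(remediation_sla(9.8))  # 2 days, 0:00:00
```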

Security Policy Types

Acceptable Use Policy (AUP)

Defines permitted and prohibited use of organizational systems, data, and networks. Covers social engineering awareness, data handling, and personal use limits.

Secure Development Policy

Mandates security practices in the SDLC: code review requirements (4-eyes), SAST integration in CI, pen test before release, secure coding standards compliance (OWASP Top 10, SANS Top 25).

Vulnerability Management Policy

Defines patching SLAs by severity, scanning cadence, exception process, and escalation paths. The authoritative document that makes CVSS SLAs binding — not just guidelines.

Incident Response Policy

Defines roles, responsibilities, escalation paths, and communication procedures for security incidents. Requires regular tabletop exercises and post-incident reviews.

Change Management Policy

4-eyes principle for production changes, Change Advisory Board (CAB) approval for high-risk changes, rollback requirements, and emergency change procedures. Prevents unauthorized modifications.

Developer Security Training

Training Type      | Content                                                                        | Audience
Awareness          | Phishing recognition, social engineering, data classification, data handling  | All employees
Secure Coding      | OWASP Top 10, SANS Top 25, language-specific risks, secure design patterns    | All developers
Security Champions | Advanced threat modeling, SAST/DAST tool usage, security review techniques    | Selected developers per team
Bug Bounty         | Encourages responsible disclosure from external researchers — ongoing real-world testing | External researchers

Practice Quiz — 10 Questions

Select the best answer for each question; the preceding sections cover everything the questions test.

1. SAST analyzes:
2. Which approach has the lowest false positive rate because it tests the actual running application as a black box?
3. IAST differs from DAST primarily because:
4. Fuzzing is most effective at finding which vulnerability category?
5. "Shift left" security means:
6. RASP differs from IAST in that RASP:
7. Which metric measures the average time from vulnerability introduction to discovery?
8. A Security Champion in an Agile team is responsible for:
9. Which pen test type gives the tester full access to source code and architecture documents?
10. Software Composition Analysis (SCA) is used to:

Memory Hooks

Six high-retention mnemonics engineered for the CSSLP exam. Each card locks in a key distinction or framework you need cold.

🔬
Testing Tool Ladder
"SAST=Code · DAST=Running · IAST=Inside · RASP=Production"
SAST: no execution (dead code). DAST: live app from outside (black box). IAST: inside the running app during testing (instrumented). RASP: inside running production app (blocks attacks). IAST=testing, RASP=protection. Know the distinction cold.
⬅️
Shift Left
"Fix in Requirements (1×) not Production (100×)"
Cost climbs steeply with each SDLC phase. Shift left: threat model in design, SAST in CI pipeline, security criteria in Definition of Done. Earlier discovery = faster feedback loop + exponentially cheaper to fix.
🔨
Fuzzing Types
"Dumb · Smart · Mutation · Coverage-Guided (Best)"
Dumb: random bytes. Smart/generation: structured format with mutations. Mutation: take valid input, mutate it. Coverage-guided (AFL, libFuzzer): tracks code paths, mutates toward uncovered branches — most efficient, finds deepest bugs. OSS-Fuzz has found thousands of bugs in open-source projects.
📊
Security Metrics Trio
"MTTD · MTTR · Density"
MTTD=how fast you find issues. MTTR=how fast you fix them. Density=bugs per KLOC (quality indicator). All should decrease over time as security program matures. Report to management: improving these shows ROI on security investment.
🔄
CI/CD Gates
"Commit→Build→Test→Release→Deploy→Monitor"
Pre-commit: secrets scan. Build: SAST+SCA. Test: DAST+IAST. Pre-release: pen test. Deploy: RASP+WAF. Monitor: alerts+threat intel. Critical SAST finding MUST fail the build — don't let broken code progress.
🏆
Pen Test Types
"Black=No Info · Grey=Some · White=Full"
Black box: external attacker simulation, most realistic, least efficient. Grey box: web app tests (common). White box: most thorough, full source+arch access. Red Team: adversary simulation including physical+social+technical. Scope + Rules of Engagement defined upfront.

Flashcards & Study Advisor

Each card pairs a prompt with its full explanation. Use the Study Advisor below for targeted topic guidance.


SAST vs DAST Timing
When does each tool run in the SDLC?
SAST: during development and CI — no running app needed, fast feedback to developers. DAST: requires running app — staging/testing phase. SAST=static (source), DAST=dynamic (running). Neither replaces the other — use both for coverage. SAST catches issues earlier; DAST finds runtime and configuration issues.
Coverage-Guided Fuzzing
How does AFL/libFuzzer work?
AFL/libFuzzer track which code branches are covered by inputs, mutate toward uncovered paths. Far more efficient than dumb random fuzzing. Google OSS-Fuzz applies this to open source projects continuously. Corpus = set of seed inputs that gets mutated. Crashes = potential vulnerabilities to triage.
Security in Definition of Done
What security criteria must every sprint meet?
Agile DoD security criteria: SAST scan run with no critical findings, security code review completed (4-eyes), security acceptance criteria for all stories verified, no hardcoded secrets detected, security regression tests pass. Every sprint must meet DoD — security not deferred to a "security sprint."
SCA — Software Composition Analysis
What does SCA find and why does it matter?
Scans third-party/open-source dependencies for known CVEs (NVD, OSV, GitHub Advisory DB). Tools: Snyk, OWASP Dependency-Check, Black Duck, Dependabot. Log4Shell (CVE-2021-44228): critical RCE in Log4j used by millions — shows importance of SCA. You own ALL code in your binary including dependencies.
Vulnerability SLAs
How fast must each severity level be patched?
Critical (CVSS 9-10): 24-48 hours. High (7-8.9): 7 days. Medium (4-6.9): 30 days. Low (0.1-3.9): 90 days. Defined in Vulnerability Management Policy. Measured by Patch Compliance Rate. SLAs without enforcement are just suggestions — gate deployments on compliance.
IAST vs RASP
Same technology — different purpose and phase
Same agent technology, different purpose and phase: IAST in TEST environment — finds vulnerabilities, produces report with code location. RASP in PRODUCTION — blocks attack attempts in real time, generates security alerts. IAST diagnoses. RASP treats. Both are "inside" the app but serve opposite lifecycle purposes.
Pen Test Phases
The 5 phases of a penetration test
Reconnaissance (passive+active info gathering) → Scanning (ports, services, vuln scan) → Exploitation (exploit findings) → Post-Exploitation (lateral movement, priv esc, data exfil simulation) → Reporting (findings, CVSS ratings, remediation steps). Always preceded by Rules of Engagement (RoE) document defining scope and authorization.
Security Debt
How to measure and manage accumulated vulnerabilities
Accumulated deferred security fixes = security debt. Like technical debt — grows with interest over time. Measure: open vulnerabilities × avg severity. Track trend: increasing debt = deteriorating posture. Manage in sprints: prioritize by risk score, set remediation SLAs, don't let critical debt age past SLA.

Study Advisor


SAST & DAST

  • CI/CD integration points — where in the pipeline does each tool run?
  • Tuning false positive rate — start with critical rules only, expand coverage iteratively
  • Combining SAST+DAST for layered coverage — complementary, not redundant
  • SAST for compliance evidence — automated audit trail for SOC 2, PCI-DSS
  • DAST authenticated scanning setup — requires session token or credential injection (see the sketch below)
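
A minimal sketch of the session-injection idea, with hypothetical endpoints — authenticate once, then reuse the session so probes reach pages behind login:

```python
import requests

# Log in once; the Session object carries the resulting cookie/token.
session = requests.Session()
session.post(
    "https://staging.example.com/login",          # hypothetical login endpoint
    data={"user": "scanner", "password": "***"},  # scanner service account
    timeout=10,
)

# Subsequent probes are authenticated automatically:
resp = session.get("https://staging.example.com/account/settings", timeout=10)
print(resp.status_code)  # 200 means the scanner sees the page behind login
```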
