Premium Practice Questions
Question 1 of 19
As a lead cybersecurity analyst for a major financial services firm in New York, you are tasked with conducting a semi-annual cyber threat landscape analysis. Following the recent SEC enhancement of cybersecurity risk management and disclosure rules, your report must provide a strategic view of external pressures. During the assessment, you observe an increase in sophisticated social engineering campaigns targeting executive leadership across the US banking sector. Which approach best fulfills the requirements for a comprehensive threat landscape analysis in this regulatory context?
Correct: Evaluating the intent, capability, and opportunity of threat actors is a core component of threat landscape analysis. This approach aligns with NIST SP 800-30 risk assessment standards and SEC requirements for firms to describe their processes for assessing, identifying, and managing material risks from cybersecurity threats. By understanding the ‘who’ and ‘how’ behind potential attacks, the organization can better anticipate and defend against sector-specific threats like executive-targeted social engineering.
Incorrect: Relying solely on internal availability metrics fails to account for external shifts in adversary tactics and emerging threats that have not yet manifested as incidents. Simply adopting a compliance-only approach ignores the dynamic nature of the threat landscape and may leave the organization vulnerable to targeted attacks not covered by static baselines. Opting to patch every bug without considering exploitability or threat relevance is an inefficient use of resources that contradicts the risk-based prioritization recommended by US regulatory frameworks.
Takeaway: Effective threat landscape analysis requires evaluating adversary intent and capabilities to prioritize risk mitigation within a US regulatory framework.
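The intent-capability-opportunity lens described above can be turned into a simple prioritization sketch. The actor names, factors, and 1-to-5 ratings below are illustrative assumptions, not values prescribed by NIST SP 800-30:

```python
# Toy threat-actor prioritization: score = intent x capability x opportunity.
# Actor names and ratings are hypothetical illustrations.
actors = [
    {"name": "commodity phishing kit", "intent": 3, "capability": 2, "opportunity": 4},
    {"name": "executive-targeting group", "intent": 5, "capability": 4, "opportunity": 3},
    {"name": "opportunistic scanner", "intent": 2, "capability": 2, "opportunity": 5},
]

def score(actor):
    return actor["intent"] * actor["capability"] * actor["opportunity"]

ranked = sorted(actors, key=score, reverse=True)
print(ranked[0]["name"])  # executive-targeting group ranks first (score 60)
```

A real assessment would weight these factors against sector-specific intelligence rather than multiply flat scores, but the ranking idea is the same.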
Question 2 of 19
A financial institution regulated by the Securities and Exchange Commission (SEC) is upgrading its internal communications security to protect sensitive trading data. The security architecture team identifies that the current method of checking certificate validity relies on periodic file downloads, which creates a window of vulnerability between updates. To ensure compliance with high-availability requirements and provide immediate revocation status for individual certificates, the team needs to implement a request-response mechanism. Which PKI component or protocol is best suited to provide this real-time validation while minimizing network overhead?
Correct: The Online Certificate Status Protocol (OCSP) allows for real-time verification by querying a responder for the status of a specific certificate, which eliminates the need for clients to download and parse large revocation files.
Incorrect: Relying solely on Certificate Revocation Lists involves a time-lag between the revocation of a certificate and the publication of a new list, which fails to meet the requirement for immediate status updates. The strategy of focusing only on the Registration Authority is inappropriate because this entity handles the administrative tasks of verifying identities during the certificate issuance process rather than checking status during a handshake. Choosing to implement a Key Escrow Service is irrelevant to certificate status checking as it primarily deals with the secure storage of private keys to ensure data recovery for legal or administrative purposes.
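The revocation-freshness gap described above can be modeled in a few lines. This is a toy simulation of the two lookup styles, not a real PKI client; the serial number and timestamps are invented:

```python
from datetime import datetime

# A CRL is a snapshot: revocations after publication are invisible until the
# next download. An OCSP responder answers from live revocation state.
crl_published = datetime(2024, 6, 6, 0, 0)
crl_serials = set()  # serials already revoked when the CRL was published

live_revocations = {0x1A2B: datetime(2024, 6, 6, 9, 30)}  # revoked mid-cycle

def crl_status(serial):
    return "revoked" if serial in crl_serials else "good"

def ocsp_status(serial, now):
    revoked_at = live_revocations.get(serial)
    return "revoked" if revoked_at and revoked_at <= now else "good"

now = datetime(2024, 6, 6, 10, 0)
print(crl_status(0x1A2B), ocsp_status(0x1A2B, now))  # good revoked
```

The stale CRL still reports "good" for a certificate revoked 30 minutes earlier, which is exactly the window of vulnerability the request-response model closes.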
Question 3 of 19
A US-based security officer is hardening server infrastructure to align with NIST SP 800-53 requirements. When selecting hardening guides for Windows and Linux environments, which approach ensures the most robust security posture?
Correct: The NIST National Checklist Program and DISA STIGs provide detailed configuration guidance tailored to specific operating system versions. These resources ensure all security controls required by federal standards are properly implemented to reduce the attack surface.
Incorrect: Relying solely on manufacturer quick-start guides often prioritizes ease of use over security, leaving many vulnerable default settings intact. Simply adopting high-level principles from governance frameworks lacks the technical specificity needed for OS hardening. The strategy of focusing only on closing open ports via scanning is a reactive measure that misses critical internal system hardening such as file system permissions. Opting for generic security templates fails to address the unique vulnerabilities of specific operating system versions.
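Checklist-based hardening is typically verified by comparing live settings against the benchmark. A minimal drift-check sketch, where the setting names and required values are invented stand-ins rather than actual STIG or NIST checklist items:

```python
# Hypothetical hardening baseline and the host's current configuration.
baseline = {
    "PasswordMinimumLength": 14,
    "SMBv1Enabled": False,
    "AuditLogonEvents": True,
}

current = {
    "PasswordMinimumLength": 8,
    "SMBv1Enabled": False,
    "AuditLogonEvents": False,
}

# Map each drifted setting to (current value, required value).
findings = {k: (current.get(k), v) for k, v in baseline.items()
            if current.get(k) != v}
print(findings)  # the two settings that drifted from the baseline
```

Real checklist content (e.g. SCAP data from the NIST National Checklist Program) is machine-readable for exactly this kind of automated comparison.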
Question 4 of 19
A financial services firm based in the United States is updating its internal software development policy to align with the NIST Secure Software Development Framework (SSDF). The development team is currently in the design phase of a new cloud-based application that will handle sensitive customer data. To minimize the cost-to-fix ratio, the project manager wants to identify structural security weaknesses before the implementation phase begins. Which activity, when integrated into this specific phase of the lifecycle, most effectively addresses the goal of proactive risk mitigation?
Correct: Threat modeling is a proactive security activity performed during the design phase of the SSDLC. By identifying potential threats, attack vectors, and trust boundaries before any code is written, organizations can address fundamental architectural flaws. This alignment with NIST SSDF principles ensures that security is baked in rather than bolted on, which is significantly more cost-effective than remediating vulnerabilities discovered during later testing or production phases.
Incorrect: Relying on dynamic testing is ineffective at this stage because it requires a functional, running application. Simply performing manual code reviews is a reactive measure that focuses on implementation errors rather than high-level design weaknesses. The strategy of using automated static analysis tools is valuable for catching syntax-related vulnerabilities during the coding phase but cannot identify logic flaws or systemic architectural issues inherent in the initial design.
Takeaway: Early integration of threat modeling in the design phase identifies architectural risks when they are least expensive to remediate.
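A design-phase threat model can start as nothing more than an enumeration of data flows and threat categories. A toy STRIDE-style sketch, where the element names come from an imagined architecture diagram rather than the scenario itself:

```python
# Hypothetical data flows from a design diagram; boundary-crossing flows get
# a full STRIDE pass before any code is written.
elements = [
    {"name": "browser -> api_gateway", "crosses_trust_boundary": True},
    {"name": "api_gateway -> trade_service", "crosses_trust_boundary": True},
    {"name": "trade_service -> internal_cache", "crosses_trust_boundary": False},
]

STRIDE = ["Spoofing", "Tampering", "Repudiation", "Information disclosure",
          "Denial of service", "Elevation of privilege"]

review_queue = [(e["name"], threat)
                for e in elements if e["crosses_trust_boundary"]
                for threat in STRIDE]
print(len(review_queue))  # 2 boundary-crossing flows x 6 categories = 12 items
```

The point is mechanical: every trust boundary identified in the design yields concrete review items long before implementation begins.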
Question 5 of 19
During a quarterly risk assessment at a publicly traded financial services firm in the United States, the compliance team identifies a breach involving unauthorized access to customer brokerage accounts. The Chief Information Security Officer (CISO) must now determine the reporting obligations under federal securities regulations. According to the SEC rules on cybersecurity risk management, strategy, governance, and incident disclosure, what is the primary requirement for reporting this incident?
Correct: Under the SEC rules adopted in 2023, registrants are required to disclose any cybersecurity incident they determine to be material on Form 8-K. This disclosure is generally due within four business days after the registrant makes the determination that the incident is material. The filing must describe the material aspects of the incident’s nature, scope, and timing, as well as its material impact or reasonably likely material impact on the registrant.
Incorrect: Relying on a 24-hour notification window misinterprets the federal timeline, which triggers from the materiality determination rather than the initial discovery. The strategy of providing a deep technical analysis in the 10-K is incorrect because the 10-K focuses on annual disclosures of risk management and governance processes rather than the immediate reporting of a specific breach. Choosing to wait for Department of Justice clearance is only applicable in very specific national security or public safety exceptions and is not the standard primary requirement for all material breaches.
Takeaway: US public companies must disclose material cybersecurity incidents on Form 8-K within four business days of the materiality determination.
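The four-business-day clock can be computed directly. A minimal sketch that counts Monday through Friday and, for simplicity, ignores US federal holidays (which would also pause the clock):

```python
from datetime import date, timedelta

def form_8k_deadline(determination: date, business_days: int = 4) -> date:
    """Count forward the given number of business days (Mon-Fri) from the
    materiality determination. Federal holidays are ignored for brevity."""
    d = determination
    remaining = business_days
    while remaining > 0:
        d += timedelta(days=1)
        if d.weekday() < 5:  # Monday=0 .. Friday=4
            remaining -= 1
    return d

# Materiality determined on Thursday 2024-06-06: the weekend does not count,
# so the filing deadline is the following Wednesday.
print(form_8k_deadline(date(2024, 6, 6)))  # 2024-06-12
```

Note the input is the determination date, not the incident discovery date, matching the trigger described above.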
Question 6 of 19
A compliance officer at a major United States-based broker-dealer is overseeing the migration of client financial records to a new encrypted database. To meet the requirements of the SEC Safeguards Rule regarding the protection of customer records and information, the technical team must implement a solution that balances high-speed processing for large datasets with a secure method for sharing encryption keys across distributed offices. Which cryptographic approach best addresses these operational and regulatory requirements?
Correct: Hybrid encryption is the industry standard for this scenario because it combines the efficiency of symmetric algorithms, such as AES, for encrypting large volumes of data with the secure key distribution properties of asymmetric algorithms like RSA. This approach satisfies NIST guidelines and SEC expectations for protecting non-public personal information by ensuring both performance and secure access control.
Incorrect: Choosing to use asymmetric encryption for bulk data storage is computationally prohibitive and would lead to significant system latency in a high-volume financial environment. The strategy of applying hashing algorithms is fundamentally flawed for data storage because hashing is a one-way function intended for integrity verification, not for the reversible encryption required to access records. Relying on a one-time pad is operationally impossible for a distributed database due to the insurmountable challenge of generating and securely distributing keys that match the size of the data being encrypted.
Takeaway: Hybrid encryption provides the necessary balance of performance and security required for protecting large-scale sensitive data in regulated environments.
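The two-step shape of hybrid encryption can be sketched with standard-library primitives. This is strictly a structural illustration: the SHA-256 counter-mode keystream is a stand-in for AES-GCM, the asymmetric key wrap is omitted, and none of it should protect real data:

```python
import hashlib
import secrets

def keystream(key: bytes, n: int) -> bytes:
    """Toy stream-cipher keystream (SHA-256 in counter mode); AES stand-in."""
    out = b""
    counter = 0
    while len(out) < n:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:n]

def xor_bytes(data: bytes, ks: bytes) -> bytes:
    return bytes(a ^ b for a, b in zip(data, ks))

def hybrid_encrypt(plaintext: bytes):
    session_key = secrets.token_bytes(32)  # fresh symmetric key per message
    ciphertext = xor_bytes(plaintext, keystream(session_key, len(plaintext)))
    # In production only this small session key is wrapped with the
    # recipient's RSA/ECDH public key; it is returned unwrapped here.
    return session_key, ciphertext

def hybrid_decrypt(session_key: bytes, ciphertext: bytes) -> bytes:
    return xor_bytes(ciphertext, keystream(session_key, len(ciphertext)))

key, ct = hybrid_encrypt(b"sensitive client record")
print(hybrid_decrypt(key, ct))  # b'sensitive client record'
```

The design point survives the toy substitutions: bulk data gets the fast symmetric path, and only a 32-byte key ever needs the expensive asymmetric path.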
Question 7 of 19
A United States-based financial services firm is reviewing its security controls for a new electronic record-keeping system to ensure compliance with SEC Rule 17a-4. The security architect emphasizes the need to protect the integrity of the stored data. Which of the following measures is most appropriate for achieving this goal?
Correct: Integrity within the CIA triad ensures that information is accurate and has not been modified by unauthorized parties. Under United States federal regulations like SEC Rule 17a-4, firms must use electronic storage systems that preserve records in a non-rewriteable and non-erasable format. This requirement is technically supported by hashing and digital signatures, which provide a verifiable audit trail to prove data has remained unchanged since its creation.
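The hashing-based integrity check described above can be shown in a few lines with an HMAC, which also binds the hash to a key so an attacker cannot simply recompute it. The key and record format are hypothetical; a production key would live in an HSM or KMS:

```python
import hashlib
import hmac

def seal(record: bytes, key: bytes) -> str:
    """Compute an HMAC-SHA256 tag so later modification is detectable."""
    return hmac.new(key, record, hashlib.sha256).hexdigest()

def verify(record: bytes, key: bytes, tag: str) -> bool:
    return hmac.compare_digest(seal(record, key), tag)

key = b"demo-audit-key"                    # hypothetical; use a KMS in practice
record = b"2024-06-06,BUY,100,ACME"
tag = seal(record, key)

print(verify(record, key, tag))                      # True: record untouched
print(verify(b"2024-06-06,BUY,900,ACME", key, tag))  # False: tampering detected
```

Storing the tag alongside each record gives the verifiable audit trail the explanation describes: any change to the stored bytes invalidates the tag.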
Question 8 of 19
A United States financial institution, operating under the oversight of the Securities and Exchange Commission (SEC) and following NIST cybersecurity frameworks, is redesigning its infrastructure to mitigate large-scale volumetric Distributed Denial of Service (DDoS) attacks. The institution’s primary concern is preventing its internet service provider (ISP) circuits from becoming saturated during an attack. Which of the following strategies provides the most effective protection against this specific threat to availability?
Correct: Cloud-based scrubbing services combined with Anycast routing are the most effective defense against volumetric attacks because they intercept and filter malicious traffic at a global scale before it reaches the organization’s local network. This approach prevents the local ISP circuit from being overwhelmed, which is essential for maintaining operational resilience as required by SEC Regulation SCI and FFIEC guidelines regarding service availability.
Incorrect: Relying on on-premises firewalls is insufficient because the malicious traffic will saturate the ISP link before the firewall can process or drop the packets. The strategy of simply increasing bandwidth is often ineffective and cost-prohibitive, as modern botnets can generate traffic volumes that far exceed even the most robust commercial ISP circuits. Choosing to use internal load balancers helps manage server resources but fails to address the network-level congestion that occurs at the gateway during a volumetric attack.
Takeaway: Volumetric DDoS mitigation requires off-site traffic scrubbing to prevent local network saturation and ensure continuous availability of critical financial services.
Question 9 of 19
You are a cybersecurity risk analyst at a financial services firm in the United States. During a scheduled risk assessment of a legacy web application that processes XML-based credit data, you identify that the XML parser allows the definition of external entities. Given the sensitivity of the consumer financial data protected under the Gramm-Leach-Bliley Act (GLBA), which of the following represents the most effective technical control to mitigate the risk of an XML External Entity (XXE) attack?
Correct: The most effective way to prevent XXE is to disable the underlying features of the XML parser that allow for external entity resolution. By disabling Document Type Definitions (DTDs) entirely or configuring the parser to ignore external entities, the application is no longer vulnerable to data exfiltration or Server-Side Request Forgery (SSRF) attempts. This aligns with the security standards required to protect non-public personal information under the Gramm-Leach-Bliley Act (GLBA).
Incorrect: Relying solely on a Web Application Firewall is often insufficient because attackers can use various encoding techniques or alternative XML syntax to bypass simple string-based signatures. The strategy of using schema validation (XSD) ensures the document structure is correct but does not inherently stop a vulnerable parser from resolving malicious entities defined within the DTD. Opting for transport layer security only protects data from being intercepted while in transit and fails to address the vulnerability in how the application processes the data once it arrives.
Takeaway: The primary defense against XXE vulnerabilities is disabling DTD processing or external entity resolution within the XML parser configuration.
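One concrete way to apply this control in Python's standard-library SAX parser is to switch off external general entity resolution (recent Python versions already default to this; setting it explicitly documents the intent). The payload below is a classic XXE probe; with the feature disabled, the entity is simply skipped and the referenced file is never opened:

```python
import io
import xml.sax
from xml.sax.handler import feature_external_ges

class TextCollector(xml.sax.ContentHandler):
    """Accumulates character data from the parsed document."""
    def __init__(self):
        super().__init__()
        self.chunks = []

    def characters(self, content):
        self.chunks.append(content)

def parse_hardened(xml_bytes: bytes) -> str:
    parser = xml.sax.make_parser()
    # Refuse to resolve external general entities -- the XXE vector.
    parser.setFeature(feature_external_ges, False)
    handler = TextCollector()
    parser.setContentHandler(handler)
    parser.parse(io.BytesIO(xml_bytes))
    return "".join(handler.chunks)

payload = b"""<?xml version="1.0"?>
<!DOCTYPE data [<!ENTITY xxe SYSTEM "file:///etc/hostname">]>
<data>&xxe;</data>"""

print(repr(parse_hardened(payload)))  # '' -- entity skipped, nothing leaks
```

Equivalent switches exist in most parsers (e.g. disabling DTDs outright), and disabling the feature at the parser is preferable to filtering payloads upstream, for the reasons given above.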
Question 10 of 19
A US-based financial services firm is updating its network security architecture to better protect against sophisticated layer 7 attacks and unauthorized data exfiltration. The IT security team needs a solution that goes beyond simple packet filtering and connection tracking to identify specific software functions and user identities within encrypted traffic. Which firewall technology most effectively addresses these requirements by integrating deep packet inspection with application-level awareness?
Correct: Next-Generation Firewalls (NGFW) integrate traditional firewall capabilities with deep packet inspection. This allows them to identify and control specific applications regardless of the port used. This technology supports the granular security controls required by US financial regulations for protecting sensitive consumer data.
Incorrect: Relying on stateful inspection provides connection tracking but fails to analyze the actual payload of the packets for application-specific threats. The strategy of using circuit-level gateways only validates session-layer handshakes without inspecting the data content for malicious activity. Opting for static packet filters is insufficient because they only examine basic header information and cannot detect sophisticated attacks.
Takeaway: Next-Generation Firewalls offer deep packet inspection and application-level visibility to protect against modern, complex cyber threats in regulated environments.
Question 11 of 19
A security operations center (SOC) lead at a U.S. national bank is reviewing alerts from the past 24 hours. An anomaly detection system flagged a persistent outbound connection from a server governed by the Gramm-Leach-Bliley Act (GLBA) data protection standards. The traffic is encrypted and bypasses the standard web proxy. To effectively investigate whether this represents a sophisticated data exfiltration attempt, which monitoring approach provides the most actionable intelligence regarding the source of the activity?
Correct: Correlating NetFlow metadata with host-based execution logs allows the analyst to identify the specific binary initiating the connection. This provides the necessary context to determine if the activity aligns with authorized administrative tasks.
Incorrect: Performing a reverse DNS lookup might only reveal a generic cloud hosting provider and does not identify the internal source of the traffic. Simply increasing the logging level on a stateful firewall provides more connection details but lacks the visibility into internal host processes required for attribution. The strategy of comparing traffic volume against a domain controller is irrelevant because the activity originates from a database server with different traffic patterns and functional roles.
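The correlation itself is a simple join: the flagged flow's source port is matched to the process that owned that local socket in the host's execution log. The flow record, log entries, paths, and ports below are invented for illustration:

```python
# Flagged NetFlow record (hypothetical values).
flow = {"src_ip": "10.0.5.20", "src_port": 49712,
        "dst_ip": "203.0.113.9", "dst_port": 443}

# Hypothetical EDR/auditd-style execution log entries from the same host.
host_events = [
    {"pid": 1204, "image": "/usr/sbin/postgres", "local_port": 50311},
    {"pid": 8842, "image": "/tmp/.cache/updater", "local_port": 49712},
]

def attribute(flow, events):
    """Return the logged process that opened the flagged connection, if any."""
    return next((e for e in events if e["local_port"] == flow["src_port"]), None)

match = attribute(flow, host_events)
print(match["image"] if match else "no attribution")  # /tmp/.cache/updater
```

That binary path is the actionable intelligence: a connection owned by an unexpected executable in a temp directory reads very differently from one owned by the database process itself.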
Question 12 of 19
A United States-based financial services firm is reviewing its web application security to ensure compliance with the SEC’s cybersecurity risk management requirements. During a vulnerability assessment, the team identifies several instances where user-supplied data is directly concatenated into database queries. Which approach is most appropriate for remediating these injection flaws while maintaining alignment with federal cybersecurity standards?
Correct: Implementing parameterized queries, also known as prepared statements, ensures that the database engine treats user input strictly as data rather than executable code. This approach addresses the root cause of injection flaws. By integrating this into an SDLC aligned with NIST SP 800-53, the organization fulfills federal requirements for System and Information Integrity (SI) and secure software development, providing a robust defense-in-depth strategy that satisfies US regulatory expectations for protecting sensitive financial data.
Incorrect: The strategy of relying on perimeter defenses like firewalls as a primary solution is insufficient because it fails to address the underlying vulnerability in the application code and can be bypassed by encrypted traffic or novel attack patterns. Relying on client-side sanitization alone is ineffective because attackers can easily circumvent browser-based controls using intercepting proxies to send malicious payloads directly to the server. Opting for database-level triggers to block specific keywords is a reactive approach that is easily defeated by SQL obfuscation techniques and does not meet the standard for proactive secure coding practices required by US compliance frameworks.
Takeaway: Effective injection prevention requires server-side parameterized queries and robust input validation integrated into a formal secure development framework.
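The data-versus-code distinction is easy to demonstrate with the standard-library `sqlite3` driver. The table and payload below are illustrative; the mechanism (`?` placeholders bound by the driver) is exactly the parameterized-query remediation described above:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (id INTEGER, owner TEXT)")
conn.execute("INSERT INTO accounts VALUES (1, 'alice'), (2, 'bob')")

malicious = "alice' OR '1'='1"  # classic injection payload

# The placeholder binds the payload strictly as data, never as SQL.
rows = conn.execute(
    "SELECT id FROM accounts WHERE owner = ?", (malicious,)
).fetchall()
print(rows)  # [] -- the literal string matches no owner, nothing leaks

ok = conn.execute(
    "SELECT id FROM accounts WHERE owner = ?", ("alice",)
).fetchall()
print(ok)  # [(1,)]
```

Had the payload been concatenated into the query string, the `OR '1'='1'` clause would have returned every row; bound as a parameter, it is just an oddly named owner.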
Question 13 of 19
13. Question
A security audit of a US-based brokerage firm’s trading platform reveals that a user can bypass the 4:00 PM EST trading cutoff by modifying the local system clock on their workstation before submitting a trade cancellation request. The application’s workflow accepts the client-provided timestamp to determine if the cancellation is permissible under internal compliance policies. Which category of vulnerability does this represent, and how should the firm’s development team address it?
Correct
Correct: This scenario describes a business logic flaw where the application’s design incorrectly trusts client-side data to enforce a critical regulatory and operational rule. By moving the validation to the server and using server-generated time, the firm ensures that the business logic cannot be subverted by end-user manipulation, maintaining compliance with US financial standards and ensuring the integrity of the trading window.
Incorrect: Relying on client-side regex filters only addresses the format of the data rather than the integrity or truthfulness of the value provided. The strategy of decreasing session timeouts might reduce the window of opportunity but fails to address the underlying logic error that allows timestamp manipulation. Opting for a Web Application Firewall to detect parameter pollution is a network-layer defense that does not fix the fundamental design flaw in how the application processes business-specific workflow rules.
Takeaway: Business logic flaws must be mitigated by enforcing all critical workflow rules and temporal constraints on the server side.
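The server-side principle in this takeaway can be sketched in a few lines. This is a minimal illustration, assuming Python, a fixed EST offset, and a hypothetical `cancellation_permitted` helper; a production system would use the exchange's authoritative clock and proper DST handling.

```python
from datetime import datetime, time, timedelta, timezone

# Simplified for illustration: fixed EST offset (no DST), per the scenario.
EASTERN = timezone(timedelta(hours=-5))  # EST (UTC-5)
CUTOFF = time(16, 0)                     # 4:00 PM trading cutoff

def cancellation_permitted(now=None):
    """Enforce the cutoff using server-generated time only.

    No client-supplied timestamp is ever consulted, so changing the
    workstation clock has no effect on the decision.
    """
    now = now or datetime.now(tz=EASTERN)  # authoritative server clock
    return now.astimezone(EASTERN).time() < CUTOFF
```

Because the decision depends only on the server's clock, the manipulation described in the question becomes impossible by construction rather than by detection.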
-
Question 14 of 19
14. Question
A compliance review at a large investment firm in the United States has identified significant risks regarding long-standing administrative credentials in their cloud infrastructure. The Chief Information Security Officer (CISO) requires a new policy that addresses the risk of credential theft while meeting SEC requirements for safeguarding customer records. The policy must ensure that administrative rights are not persistent and are granted only under specific, verified conditions.
Correct
Correct: Implementing Just-in-Time access ensures that administrative privileges are only active during a specific window of need, which directly addresses the CISO’s requirement for non-persistent rights and aligns with SEC guidelines for protecting sensitive data.
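The non-persistence property of Just-in-Time access can be modeled with a small sketch. The `JITGrant` record and `request_admin_access` helper below are hypothetical stand-ins; in practice this logic lives in the cloud provider's privileged access management service.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

@dataclass
class JITGrant:
    principal: str
    role: str
    approved_by: str        # verified condition: second-party approval
    expires_at: datetime    # rights lapse automatically at this instant

    def is_active(self, now=None):
        now = now or datetime.now(timezone.utc)
        return now < self.expires_at

def request_admin_access(principal, approver, window_minutes=60):
    # A real deployment would call the provider's PIM/PAM API here;
    # this sketch only models the time-boxed, non-persistent grant.
    return JITGrant(
        principal=principal,
        role="admin",
        approved_by=approver,
        expires_at=datetime.now(timezone.utc) + timedelta(minutes=window_minutes),
    )
```

The key design point is that expiry is intrinsic to the grant itself, so no cleanup job or manual revocation is needed for rights to disappear.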
-
Question 15 of 19
15. Question
Your team is drafting a secure coding policy for a financial institution listed on the New York Stock Exchange to meet SEC cybersecurity risk management requirements. A key unresolved point in the policy involves the mandatory standard for preventing injection attacks within the firm’s Python and Java-based web applications. To align with NIST guidance on application security, the policy must specify the most effective method for handling user-supplied data in database queries.
Correct
Correct: Parameterized queries, also known as prepared statements, are the most effective defense against SQL injection because they ensure the database treats user input strictly as data rather than executable code. This approach aligns with NIST SP 800-53 controls for system and information integrity, which are foundational for US financial institutions maintaining SEC compliance.
Incorrect: Relying solely on client-side validation is a significant security flaw because attackers can easily bypass browser-based controls using intercepting proxies. The strategy of depending on a Web Application Firewall provides a useful layer of defense-in-depth but fails to address the root cause of the vulnerability within the application code itself. Opting for a blacklist of prohibited characters is generally ineffective as attackers can use various encoding techniques or alternative syntax to circumvent simple pattern-matching filters.
Takeaway: Using parameterized queries is the primary technical control for preventing injection vulnerabilities in modern software development frameworks.
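The difference between string concatenation and a parameterized query can be shown concretely. This sketch uses Python's built-in `sqlite3` module as a stand-in for the firm's database; the table and payload are illustrative only.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (owner TEXT, balance REAL)")
conn.execute("INSERT INTO accounts VALUES ('alice', 100.0)")

user_input = "alice' OR '1'='1"  # classic injection payload

# UNSAFE (shown only as a comment): concatenation lets input become SQL.
# query = "SELECT balance FROM accounts WHERE owner = '" + user_input + "'"

# SAFE: the ? placeholder binds user_input strictly as data, never as code.
rows = conn.execute(
    "SELECT balance FROM accounts WHERE owner = ?", (user_input,)
).fetchall()

# The payload matches no literal owner name, so nothing leaks.
print(rows)

# A legitimate lookup through the same parameterized statement still works.
rows_ok = conn.execute(
    "SELECT balance FROM accounts WHERE owner = ?", ("alice",)
).fetchall()
print(rows_ok)
```

The same placeholder discipline applies in Java via `PreparedStatement`: the query shape is fixed at compile time and user input can only ever fill a value slot.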
-
Question 16 of 19
16. Question
A United States financial services corporation is upgrading its wireless infrastructure to comply with NIST cybersecurity guidelines. The organization requires a solution that provides the strongest possible mutual authentication to protect against rogue access points and credential theft. Which configuration should the network administrator implement to achieve this level of security in a WPA3-Enterprise environment?
Correct
Correct: EAP-TLS is considered the gold standard in US enterprise security frameworks because it requires digital certificates on both the client and the authentication server. This mutual authentication ensures that the client only connects to a verified network and the network only accepts verified devices, which effectively prevents man-in-the-middle attacks and credential harvesting.
Incorrect: WPA3-SAE is unsuitable for large-scale enterprise environments because it uses a shared password mechanism that lacks the granular control and individual accountability required by US regulatory standards. The strategy of using PEAP with EAP-MSCHAPv2 is less secure because it relies on user passwords, which are vulnerable to dictionary attacks if the user bypasses server certificate validation. Opting for EAP-TTLS with PAP as the inner authentication method is strongly discouraged because PAP carries credentials in plaintext inside the tunnel, so a compromised or spoofed tunnel endpoint exposes them directly.
Takeaway: Mutual certificate-based authentication via EAP-TLS provides the highest level of security for enterprise wireless networks.
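EAP-TLS itself is configured on the RADIUS server and supplicants, not in application code, but the underlying mutual-authentication principle can be illustrated with Python's `ssl` module: a server context that requires a client certificate refuses any peer that cannot present one. The file paths in the comments are hypothetical.

```python
import ssl

# Server-side TLS context that demands a client certificate, mirroring
# the "both sides prove identity with certificates" property of EAP-TLS.
ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
ctx.verify_mode = ssl.CERT_REQUIRED        # reject clients without certs
ctx.minimum_version = ssl.TLSVersion.TLSv1_2

# A real deployment would also load the server's own certificate and the
# enterprise CA used to validate clients (paths are illustrative):
# ctx.load_cert_chain("server.pem", "server.key")
# ctx.load_verify_locations("enterprise-ca.pem")
```

With `CERT_REQUIRED` set, the handshake itself fails for an unauthenticated client, which is the same fail-closed behavior that makes EAP-TLS resistant to rogue access points and credential harvesting.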
-
Question 17 of 19
17. Question
A financial services corporation in the United States is reviewing its Public Key Infrastructure (PKI) to meet updated U.S. Securities and Exchange Commission (SEC) data integrity standards. The audit reveals that the current system for checking certificate validity relies on periodic downloads of comprehensive lists, which is causing network congestion and delayed transaction processing. The IT department needs a solution that provides immediate, per-certificate status updates without the overhead of transferring entire databases of revoked serial numbers. Which protocol should the infrastructure team implement to resolve these performance issues while maintaining real-time validation?
Correct
Correct: The Online Certificate Status Protocol allows for real-time verification of a single certificate’s status by sending a request to a responder, which returns a concise status. This minimizes bandwidth and processing time compared to traditional methods, ensuring compliance with SEC performance and security expectations.
Incorrect: Relying solely on Certificate Revocation Lists forces the client to download a complete file of all revoked certificates, which grows over time and creates significant latency. Simply conducting identity verification through a Registration Authority is limited to the issuance phase and does not address real-time validity during a transaction. Opting for the Certificate Signing Request process only manages the initial application for a certificate rather than providing a mechanism for ongoing status verification.
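The bandwidth contrast between the two models can be sketched conceptually. This is not a wire-level OCSP implementation; the serial numbers, set sizes, and helper names are illustrative only, and the "transferred" counts simply model how much revocation data each approach moves to the client.

```python
# A large revocation set, standing in for a CA's list of revoked serials.
revoked = {f"serial-{i}" for i in range(100_000)}

def check_via_crl(serial):
    # CRL model: the client effectively downloads the ENTIRE revoked set,
    # then searches it locally.
    downloaded = set(revoked)  # full-list transfer
    status = "revoked" if serial in downloaded else "good"
    return status, len(downloaded)

def check_via_ocsp(serial):
    # OCSP model: the responder is asked about ONE serial number and
    # returns a single concise status.
    status = "revoked" if serial in revoked else "good"
    return status, 1

status_crl, transferred_crl = check_via_crl("serial-42")
status_ocsp, transferred_ocsp = check_via_ocsp("serial-42")
print(status_crl, transferred_crl)   # same verdict, far more data moved
print(status_ocsp, transferred_ocsp)
```

Both paths reach the same verdict, but the OCSP-style query does so with a single per-certificate exchange, which is exactly the property the scenario's IT department needs.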
-
Question 18 of 19
18. Question
A financial services firm based in the United States is migrating its core trading applications to a public Infrastructure as a Service (IaaS) environment to improve scalability. To ensure compliance with U.S. Securities and Exchange Commission (SEC) cybersecurity risk management requirements, the firm’s IT audit team is reviewing the cloud service agreement. The audit identifies a need to clarify which party is responsible for specific security controls under the shared responsibility model. Which of the following security functions is the firm’s sole responsibility in this IaaS deployment?
Correct
Correct: In an Infrastructure as a Service (IaaS) model, the cloud service provider is responsible for the security of the underlying infrastructure, including the physical facilities and the virtualization layer. The customer, such as this U.S. financial firm, is responsible for the security of everything they place on that infrastructure, which includes the guest operating system, middleware, and applications. This alignment ensures compliance with SEC expectations for maintaining operational control over the software stack and data protection.
Incorrect: Selecting physical hardware and environmental controls is wrong because these are always the provider’s responsibility in public cloud models. Identifying the hypervisor or virtualization software as the customer’s duty is likewise mistaken in an IaaS context, as the provider maintains that layer in order to offer the service. Choosing to delegate physical network cabling or storage hardware maintenance to the customer is impossible in a public cloud environment where the customer has no physical access to the provider’s facilities.
Takeaway: In IaaS, the customer is responsible for securing the guest operating system and all software layers above it.
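The responsibility split described above can be captured as a simple mapping. The layer names are generic illustrations, not tied to any one provider's published matrix.

```python
# Illustrative IaaS shared-responsibility split for this scenario.
IAAS_RESPONSIBILITY = {
    "physical facilities":    "provider",
    "network cabling":        "provider",
    "storage hardware":       "provider",
    "hypervisor":             "provider",
    "guest operating system": "customer",
    "middleware":             "customer",
    "applications":           "customer",
    "data":                   "customer",
}

# Everything the firm must secure itself sits above the virtualization layer.
customer_layers = [k for k, v in IAAS_RESPONSIBILITY.items() if v == "customer"]
print(customer_layers)
```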
-
Question 19 of 19
19. Question
A United States-based brokerage firm is developing a new mobile application for retail investors to trade equities. During the final phase of the Secure Software Development Life Cycle (SDLC), a static analysis security testing (SAST) tool identifies a hardcoded cryptographic key used for local data storage. Given the regulatory environment overseen by the United States Securities and Exchange Commission (SEC) and the need to follow NIST guidelines, what is the best next step for the development team?
Correct
Correct: Using FIPS 140-2 validated modules aligns with NIST standards and United States federal requirements for protecting sensitive financial information. Implementing a dedicated key management service ensures that cryptographic keys are rotated and stored securely, rather than being exposed in the source code.
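The direction of the fix can be sketched briefly: the key is fetched at runtime from a secrets source rather than living in the repository. The environment-variable name below is a hypothetical stand-in for an injection point fed by a real key management service; it is not a substitute for one.

```python
import os

def load_storage_key():
    """Retrieve the local-storage key provisioned by the KMS/secret manager.

    The variable name APP_STORAGE_KEY is illustrative. The critical
    property is the fail-closed behavior: if no key was provisioned,
    the application refuses to run rather than fall back to any
    hardcoded default, which is the flaw the SAST tool flagged.
    """
    key_hex = os.environ.get("APP_STORAGE_KEY")
    if key_hex is None:
        raise RuntimeError(
            "storage key not provisioned; refusing to fall back "
            "to a hardcoded default"
        )
    return bytes.fromhex(key_hex)
```

Pairing this retrieval pattern with a managed KMS also gives the firm key rotation and audit logging for free, both of which a hardcoded constant can never provide.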