Summer Limited Time 60% Discount Offer - Ends in 0d 00h 00m 00s - Coupon code: wrap60

Isaca AAISM Dumps

Page: 1 / 9
Total 90 questions

ISACA Advanced in AI Security Management (AAISM) Exam Questions and Answers

Question 1

An attacker crafts inputs to a large language model (LLM) to exploit output integrity controls. Which of the following types of attacks is this an example of?

Options:

A.

Prompt injection

B.

Jailbreaking

C.

Remote code execution

D.

Evasion

Question 2

The PRIMARY benefit of implementing moderation controls in generative AI applications is that it can:

Options:

A.

Increase the model’s ability to generate diverse and creative content

B.

Optimize the model’s response time

C.

Ensure the generated content adheres to privacy regulations

D.

Filter out harmful or inappropriate content

Question 3

When integrating AI for innovation, which of the following can BEST help an organization manage security risk?

Options:

A.

Re-evaluating the risk appetite

B.

Seeking third-party advice

C.

Evaluating compliance requirements

D.

Adopting a phased approach

Question 4

After deployment, an AI model’s output begins to drift outside of the expected range. Which of the following is the development team’s BEST course of action?

Options:

A.

Take the AI model offline

B.

Adjust the hyperparameters of the AI model

C.

Create an emergency change request to correct the issue

D.

Return to an earlier phase in the AI life cycle

Question 5

Which of the following AI-driven systems should have the MOST stringent recovery time objective (RTO)?

Options:

A.

Health support system

B.

Credit risk modeling system

C.

Car navigation system

D.

Industrial control system

Question 6

As organizations increasingly rely on vendors to develop AI systems, which of the following is the MOST effective way to monitor vendors and ensure compliance with ethical and security standards?

Options:

A.

Conducting regular audits of vendor processes and adherence to AI development guidelines

B.

Requiring vendors to monitor their adherence to ethics and security standards

C.

Mandating that vendors share source code and AI documentation with the contracting party

D.

Allowing vendors to self-attest ethical AI compliance and implement benchmark monitoring

Question 7

Which of the following is the BEST reason to immediately disable an AI system?

Options:

A.

Excessive model drift

B.

Slow model performance

C.

Overly detailed model outputs

D.

Insufficient model training

Question 8

When an attacker uses synthetic data to reverse engineer an organization’s AI model, it is an example of which of the following types of attack?

Options:

A.

Distillation

B.

Inversion

C.

Prompt

D.

Poisoning

Question 9

Which of the following is the GREATEST benefit of implementing an AI tool to safeguard sensitive data and prevent unauthorized access?

Options:

A.

Timely analysis of endpoint activities

B.

Timely initiation of incident response

C.

Reduced number of false positives

D.

Reduced need for data classification

Question 10

A large pharmaceutical company using a new AI solution to develop treatment regimens is concerned about potential hallucinations with the introduction of real-world data. Which of the following is MOST likely to reduce this risk?

Options:

A.

Penetration testing

B.

Human-in-the-loop

C.

AI impact analysis

D.

Data asset validation

Question 11

Which of the following metrics BEST evaluates the ability of a model to correctly identify all true positive instances?

Options:

A.

F1 score

B.

Recall

C.

Precision

D.

Specificity

Question 12

Which of the following is the MOST important consideration when deciding how to compose an AI red team?

Options:

A.

Resource availability

B.

AI use cases

C.

Time-to-market constraints

D.

Compliance requirements

Question 13

Which of the following MOST effectively minimizes the attack surface when securing AI agent components during their development and deployment?

Options:

A.

Deploy pre-trained models directly into production.

B.

Consolidate event logs for correlation and centralized analysis.

C.

Schedule periodic manual code reviews.

D.

Implement compartmentalization with least privilege enforcement.

Question 14

The PRIMARY ethical concern of generative AI is that it may:

Options:

A.

Produce unexpected data that could lead to bias

B.

Cause information integrity issues

C.

Cause information to become unavailable

D.

Breach the confidentiality of information

Question 15

Which of the following recommendations would BEST help a service provider mitigate the risk of lawsuits arising from generative AI’s access to and use of internet data?

Options:

A.

Activate filtering logic to exclude intellectual property flags

B.

Disclose service provider policies to declare compliance with regulations

C.

Appoint a data steward specialized in AI to strengthen security governance

D.

Review log information that records how data was collected

Question 16

Which of the following is a key risk indicator (KRI) for an AI system used for threat detection?

Options:

A.

Number of training epochs

B.

Training time of the model

C.

Number of layers in the neural network

D.

Number of system overrides by cyber analysts

Question 17

Which of the following is the BEST mitigation control for membership inference attacks on AI systems?

Options:

A.

Model ensemble techniques

B.

AI threat modeling

C.

Differential privacy

D.

Cybersecurity-oriented red teaming

Question 18

Which of the following is the GREATEST risk inherent to implementing generative AI?

Options:

A.

Lack of employee training

B.

Unidentified asset vulnerabilities

C.

Inadequate return on investment (ROI)

D.

Potential intellectual property violations

Question 19

To ensure AI tools do not jeopardize ethical principles, it is MOST important to validate that:

Options:

A.

The organization has implemented a responsible development policy

B.

Outputs of AI tools do not perpetuate adverse biases

C.

Stakeholders have approved alignment with company values

D.

AI tools are evaluated by the privacy department before implementation

Question 20

When documenting information about machine learning (ML) models, which of the following artifacts BEST helps enhance stakeholder trust?

Options:

A.

Hyperparameters

B.

Data quality controls

C.

Model card

D.

Model prototyping

Question 21

Which of the following is the MOST important course of action when implementing continuous monitoring and reporting for AI-based systems?

Options:

A.

Establish an automated alert system for threshold breaches in risk metrics

B.

Develop standardized risk reporting templates for different stakeholder groups

C.

Implement real-time monitoring of key risk indicators (KRIs) for AI systems

D.

Implement a risk dashboard for visualizing and tracking AI-related risk over time

Question 22

An organization uses an AI tool to scan social media for product reviews. Fraudulent social media accounts begin posting negative reviews attacking the organization's product. Which type of AI attack is MOST likely to have occurred?

Options:

A.

Model inversion

B.

Deepfake

C.

Availability attack

D.

Data poisoning

Question 23

An organization recently introduced a generative AI chatbot that can interact with users and answer their queries. Which of the following would BEST mitigate hallucination risk identified by the risk team?

Options:

A.

Performing model testing and validation

B.

Training the foundational model on large data sets

C.

Ensuring model developers have been trained in AI risk

D.

Fine-tuning the foundational model

Question 24

Which of the following technologies can be used to manage deepfake risk?

Options:

A.

Systematic data tagging

B.

Multi-factor authentication (MFA)

C.

Blockchain

D.

Adaptive authentication

Question 25

Which of the following will BEST reduce data bias in machine learning (ML) algorithms?

Options:

A.

Adopting a more simplified model

B.

Utilizing unstructured data sets

C.

Diversifying the model training data

D.

Securing the model training data

Question 26

Which of the following AI system vulnerabilities is MOST easily exploited by adversaries?

Options:

A.

Inaccurate generalizations from new data by the AI model

B.

Weak controls for access to the AI model

C.

Lack of protection against denial of service (DoS) attacks

D.

Inability to detect input modifications causing inappropriate AI outputs

Question 27

An AI research team is developing a natural language processing model that relies on several open-source libraries. Which of the following is the team’s BEST course of action to ensure the integrity of the software packages used?

Options:

A.

Maintain a list of frequently used libraries to ensure consistent application in projects

B.

Scan the packages and libraries for malware prior to installation

C.

Use the latest version of all libraries from public repositories

D.

Retrain the model regularly to handle package and library updates

Page: 1 / 9
Total 90 questions