Topic 1: Part 1
CASE STUDY
Please use the following to answer the next question:
Good Values Corporation (GVC) is a U.S. educational services provider that employs
teachers to create and deliver enrichment courses for high school students. GVC has
learned that many of its teacher employees are using generative AI to create the
enrichment courses, and that many of the students are using generative AI to complete
their assignments.
In particular, GVC has learned that the teachers it employs used open source large
language models (“LLMs”) to develop an online tool that customizes study questions for
individual students. GVC has also discovered that an art teacher has expressly
incorporated the use of generative AI into the curriculum to enable students to use prompts
to create digital art.
GVC has started to investigate these practices and to develop a process to monitor any
use of generative AI, including by teachers and students, going forward.
All of the following may be copyright risks from teachers using generative AI to create
course content EXCEPT?
A. Content created by an LLM may be protectable under U.S. intellectual property law.
B. Generative AI is generally trained using intellectual property owned by third parties.
C. Students must expressly consent to this use of generative Al.
D. Generative Al often creates content without attribution.
Explanation: All of the options listed may pose copyright risks when teachers use
generative AI to create course content, except the requirement that students expressly
consent to this use of generative AI. While obtaining student consent is important for
ethical and privacy reasons, it does not directly relate to the copyright risks associated with
the creation and use of AI-generated content.
Reference: The AIGP Body of Knowledge discusses the importance of addressing
intellectual property (IP) risks when using AI-generated content. Copyright risks are
typically associated with the use of third-party data and the lack of attribution, rather than
the consent of users.
CASE STUDY
Please use the following to answer the next question:
ABC Corp. is a leading insurance provider offering a range of coverage options to
individuals. ABC has decided to utilize artificial intelligence to streamline and improve its
customer acquisition and underwriting process, including the accuracy and efficiency of
pricing policies.
ABC has engaged a cloud provider to utilize and fine-tune its pre-trained, general-purpose
large language model (“LLM”). In particular, ABC intends to use its historical customer
data—including applications, policies, and claims—and proprietary pricing and risk
strategies to provide an initial qualification assessment of potential customers, which would
then be routed to a human underwriter for final review.
ABC and the cloud provider have completed training and testing the LLM, performed a
readiness assessment, and made the decision to deploy the LLM into production. ABC has
designated an internal compliance team to monitor the model during the first month,
specifically to evaluate the accuracy, fairness, and reliability of its output. After the first
month in production, ABC realizes that the LLM declines a higher percentage of women's
loan applications due primarily to women historically receiving lower salaries than men.
Each of the following steps would support fairness testing by the compliance team during
the first month in production EXCEPT?
A. Validating a similar level of decision-making across different demographic groups.
B. Providing the loan applicants with information about the model capabilities and limitations.
C. Identifying if additional training data should be collected for specific demographic groups.
D. Using tools to help understand factors that may account for differences in decision-making.
Explanation:
Providing the loan applicants with information about the model capabilities and limitations
would not directly support fairness testing by the compliance team. Fairness testing
focuses on evaluating the model's decisions for biases and ensuring equitable treatment
across different demographic groups, rather than informing applicants about the model.
Reference: The AIGP Body of Knowledge outlines that fairness testing involves technical
assessments such as validating decision-making consistency across demographics and using tools to understand decision factors. While transparency to applicants is important for
ethical AI use, it does not contribute directly to the technical process of fairness testing.
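As a concrete illustration of option A, a minimal sketch of the kind of check the compliance team might run is to compare approval rates across demographic groups (a demographic-parity check). The function name and the sample data below are hypothetical, not part of the case study:

```python
from collections import defaultdict

def approval_rates_by_group(decisions):
    """Compute per-group approval rates from (group, approved) pairs.

    `decisions` is a list of (demographic_group, was_approved) tuples;
    a large gap between group rates flags a potential fairness issue
    that warrants deeper analysis of the contributing factors.
    """
    totals = defaultdict(int)
    approvals = defaultdict(int)
    for group, approved in decisions:
        totals[group] += 1
        if approved:
            approvals[group] += 1
    return {g: approvals[g] / totals[g] for g in totals}

# Hypothetical audit sample mirroring the case study's finding that
# the model approves a lower share of women's applications.
sample = ([("men", True)] * 80 + [("men", False)] * 20
          + [("women", True)] * 55 + [("women", False)] * 45)
rates = approval_rates_by_group(sample)
gap = abs(rates["men"] - rates["women"])  # demographic-parity gap
```

In practice a team would combine a rate comparison like this with explainability tooling (option D) to understand which input factors, such as historical salary data, drive the gap.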
Random forest algorithms are in what type of machine learning model?
A. Symbolic.
B. Generative.
C. Discriminative.
D. Natural language processing.
Explanation: Random forest algorithms are classified as discriminative models.
Discriminative models are used to classify data by learning the boundaries between
classes, which is the core functionality of random forest algorithms. They are used for
classification and regression tasks by aggregating the results of multiple decision trees to
make accurate predictions.
Reference: The AIGP Body of Knowledge explains that discriminative models, including
random forest algorithms, are designed to distinguish between different classes in the data,
making them effective for various predictive modeling tasks.
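The aggregation idea behind random forests can be sketched in a few lines. This is a toy illustration of majority voting over single-split trees (stumps), not a real implementation: actual forests train each tree on a bootstrap sample with random feature subsets.

```python
def stump(threshold, feature_index):
    """A one-split decision tree: predicts class 1 if the chosen
    feature exceeds the threshold, else class 0."""
    return lambda x: 1 if x[feature_index] > threshold else 0

def forest_predict(trees, x):
    """Aggregate the trees' class votes by majority -- the core
    ensemble step of a random forest classifier."""
    votes = sum(tree(x) for tree in trees)
    return 1 if votes > len(trees) / 2 else 0

# Toy ensemble of three stumps over a 2-feature input.
trees = [stump(0.5, 0), stump(0.3, 1), stump(0.7, 0)]
prediction = forest_predict(trees, (0.6, 0.9))  # two of three trees vote 1
```

Because each tree learns a decision boundary between classes and the ensemble combines those boundaries, the overall model is discriminative rather than generative.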
CASE STUDY
Please use the following to answer the next question:
ABC Corp. is a leading insurance provider offering a range of coverage options to
individuals. ABC has decided to utilize artificial intelligence to streamline and improve its
customer acquisition and underwriting process, including the accuracy and efficiency of
pricing policies.
ABC has engaged a cloud provider to utilize and fine-tune its pre-trained, general-purpose
large language model (“LLM”). In particular, ABC intends to use its historical customer
data—including applications, policies, and claims—and proprietary pricing and risk
strategies to provide an initial qualification assessment of potential customers, which would
then be routed to a human underwriter for final review.
ABC and the cloud provider have completed training and testing the LLM, performed a
readiness assessment, and made the decision to deploy the LLM into production. ABC has
designated an internal compliance team to monitor the model during the first month,
specifically to evaluate the accuracy, fairness, and reliability of its output. After the first
month in production, ABC realizes that the LLM declines a higher percentage of women's
loan applications due primarily to women historically receiving lower salaries than men.
The best approach to enable a customer who wants information on the AI model's
parameters for underwriting purposes is to provide?
A. A transparency notice.
B. An opt-out mechanism.
C. Detailed terms of service.
D. Customer service support.
Explanation:
The best approach to enable a customer who wants information on the AI model's
parameters for underwriting purposes is to provide a transparency notice. This notice
should explain the nature of the AI system, how it uses customer data, and the
decision-making process it follows. Providing a transparency notice is crucial for maintaining trust
and compliance with regulatory requirements regarding the transparency and accountability
of AI systems.
Reference: According to the AIGP Body of Knowledge, transparency in AI systems is
essential to ensure that stakeholders, including customers, understand how their data is
being used and how decisions are made. This aligns with ethical principles of AI
governance, ensuring that customers are informed and can make knowledgeable decisions
regarding their interactions with AI systems.
CASE STUDY
Please use the following to answer the next question:
XYZ Corp., a premier payroll services company that employs thousands of people globally,
is embarking on a new hiring campaign and wants to implement policies and procedures to
identify and retain the best talent. The new talent will help the company's product team
expand its payroll offerings to companies in the healthcare and transportation sectors, including in Asia.
It has become time consuming and expensive for HR to review all resumes, and they are
concerned that human reviewers might be susceptible to bias.
To address these concerns, the company is considering using a third-party AI tool to screen
resumes and assist with hiring. They have been talking to several vendors about possibly
obtaining a third-party AI-enabled hiring solution, as long as it would achieve the company's
goals and comply with all applicable laws.
The organization has a large procurement team that is responsible for the contracting of
technology solutions. One of the procurement team's goals is to reduce costs, and it often
prefers lower-cost solutions. Others within the company are responsible for integrating and
deploying technology solutions into the organization's operations in a responsible,
cost-effective manner.
The organization is aware of the risks presented by AI hiring tools and wants to mitigate
them. It also questions how best to organize and train its existing personnel to use the AI
hiring tool responsibly. Their concerns are heightened by the fact that relevant laws vary
across jurisdictions and continue to change.
Which other stakeholder groups should be involved in the selection and implementation of
the AI hiring tool?
A. Finance and Legal.
B. Marketing and Compliance.
C. Supply Chain and Marketing.
D. Litigation and Product Development.
Explanation: In the selection and implementation of the AI hiring tool, involving Finance and Legal is crucial. The Finance team is essential for assessing cost implications, budget considerations, and financial risks. The Legal team is necessary to ensure compliance with applicable laws and regulations, including those related to data privacy, employment, and anti-discrimination. Involving these stakeholders ensures a comprehensive evaluation of both the financial viability and legal compliance of the AI tool, mitigating potential risks and aligning with organizational objectives and regulatory requirements.