Topic 1: Part 1
Which of the following best defines an "AI model"?
A. A system that applies defined rules to execute tasks.
B. A system of controls that is used to govern an AI algorithm.
C. A corpus of data which an AI algorithm analyzes to make predictions.
D. A program that has been trained on a set of data to find patterns within the data.
Explanation: An AI model is best defined as a program that has been trained on a set of data to find patterns within that data. This definition captures the essence of machine learning, where the model learns from the data to make predictions or decisions. Reference: AIGP BODY OF KNOWLEDGE, which provides a detailed explanation of AI models and their training processes.
According to the Singapore Model AI Governance Framework, all of the following are recommended measures to promote the responsible use of AI EXCEPT?
A. Determining the level of human involvement in algorithmic decision-making.
B. Adapting the existing governance structure to accommodate algorithmic decision-making.
C. Employing human-over-the-loop protocols for high-risk systems.
D. Establishing communications and collaboration among stakeholders.
Explanation: The Singapore Model AI Governance Framework recommends several measures to promote the responsible use of AI, such as determining the level of human involvement in decision-making, adapting governance structures, and establishing communications and collaboration among stakeholders. However, employing human-over-the-loop protocols is not specifically mentioned in this framework. The focus is more on integrating human oversight appropriately within the decision-making process rather than exclusively employing such protocols. Reference: AIGP Body of Knowledge, section on AI governance frameworks.
According to the GDPR's transparency principle, when an AI system processes personal data in automated decision-making, controllers are required to provide data subjects with specific information on:
A. The existence of automated decision-making and meaningful information on its logic and consequences.
B. The personal data used during processing, including inferences drawn by the AI system about the data.
C. The data protection impact assessments carried out on the AI system and legal bases for processing.
D. The contact details of the data protection officer and the data protection national authority.
Explanation: The GDPR's transparency principle requires that when personal data is processed for automated decision-making, including profiling, data subjects must be informed about the existence of such automated decision-making. Additionally, they must be provided with meaningful information about the logic involved, as well as the significance and the envisaged consequences of such processing for them. This requirement ensures that data subjects are fully aware of how their personal data is being used and the potential impacts, thereby promoting transparency and trust in the processing activities.
The OECD's Ethical AI Governance Framework is a self-regulation model that proposes to prevent societal harms by:
A. Establishing explainability criteria to responsibly source and use data to train AI systems.
B. Defining requirements specific to each industry sector and high-risk Al domain.
C. Focusing on Al technical design and post-deployment monitoring.
D. Balancing Al innovation with ethical considerations.
Explanation: The OECD's Ethical AI Governance Framework aims to ensure that AI development and deployment are carried out ethically while fostering innovation. The framework includes principles like transparency, accountability, and human rights protections to prevent societal harm. It does not focus solely on technical design or post-deployment monitoring (C), nor does it establish industry-specific requirements (B). While explainability is important, the primary goal is to balance innovation with ethical considerations (D).
Which of the following is an example of a high-risk application under the EU AI Act?
A. A resume scanning tool that ranks applicants.
B. An AI-enabled inventory management tool.
C. A government-run social scoring tool.
D. A customer service chatbot tool.
Explanation: The EU AI Act categorizes certain applications of AI as high-risk due to their potential impact on fundamental rights and safety. High-risk applications include those used in critical areas such as employment, education, and essential public services. A government-run social scoring tool, which assesses individuals based on their social behavior or perceived trustworthiness, falls under this category because of its profound implications for privacy, fairness, and individual rights. This contrasts with other AI applications like resume scanning tools or customer service chatbots, which are generally not classified as high-risk under the EU AI Act.