Which statement exemplifies Salesforce's honesty guideline when training AI models?
A. Minimize the AI model's carbon footprint and environmental impact during training.
B. Ensure appropriate consent and transparency when using AI-generated responses.
C. Control bias, toxicity, and harmful content with embedded guardrails and guidance.
Explanation:
Ensuring appropriate consent and transparency when using AI-generated responses exemplifies Salesforce's honesty guideline. Honesty is one of Salesforce's Trusted AI Principles: AI systems should be designed and developed with honesty and integrity in how they work and in what they produce.
In practice, ensuring appropriate consent and transparency means respecting and honoring users' choices and preferences about how their data is used or generated by AI systems, and providing clear, accurate information and documentation about the AI systems and their outputs.
What is the key difference between generative and predictive AI?
A. Generative AI creates new content based on existing data and predictive AI analyzes existing data.
B. Generative AI finds content similar to existing data and predictive AI analyzes existing data.
C. Generative AI analyzes existing data and predictive AI creates new content based on existing data.
Explanation:
The key difference is that generative AI creates new content based on existing data, while predictive AI analyzes existing data. Generative AI can produce novel content such as images, text, music, or video from existing data or inputs. Predictive AI analyzes existing data or inputs and makes predictions or recommendations based on patterns or trends.
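As a rough illustration of this distinction (not part of the exam material), the Python sketch below builds both behaviors from the same tiny, invented corpus: a predictive step that analyzes the existing data to estimate the most likely next word, and a generative step that samples new word sequences from the same statistics. All names and data here are made up for the example.

```python
import random
from collections import Counter, defaultdict

# A tiny invented corpus standing in for "existing data".
corpus = "the shoe fits the foot and the shoe ships fast".split()

# Learn word-to-next-word statistics from the existing data.
transitions = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    transitions[current_word][next_word] += 1

def predict_next(word):
    """Predictive use: analyze existing data and return the most likely next word."""
    return transitions[word].most_common(1)[0][0]

def generate(start, length=5):
    """Generative use: sample a new sequence that need not appear verbatim in the data."""
    words = [start]
    for _ in range(length):
        options = transitions.get(words[-1])
        if not options:
            break
        choices, weights = zip(*options.items())
        words.append(random.choices(choices, weights=weights)[0])
    return " ".join(words)

print(predict_next("the"))   # a prediction about existing patterns, e.g. "shoe"
print(generate("the"))       # newly generated content, e.g. "the foot and the shoe fits"
```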
Which action introduces bias in the training data used for AI algorithms?
A. Using a large dataset that is computationally expensive
B. Using a dataset that represents diverse perspectives and populations
C. Using a dataset that underrepresents perspectives and populations
Explanation: Introducing bias in training data for AI algorithms occurs when the dataset used underrepresents certain perspectives and populations. This type of bias can skew AI predictions, making the system less fair and accurate.
For example, if a dataset predominantly contains information from one demographic group, the AI's performance may not generalize well to other groups, leading to biased or unfair outcomes. Salesforce discusses the impact of biased training data and ways to mitigate this in their AI ethics guidelines, which can be explored further in the Salesforce AI documentation on Responsible Creation of AI.
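The effect described above can be reproduced in a few lines. The hedged scikit-learn sketch below uses synthetic data invented for illustration (it is not drawn from Salesforce material): one group supplies only a small share of the training examples, and the resulting model performs noticeably worse on that group.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_group(n, shift):
    """Synthetic examples for one group; `shift` changes the feature-label relationship."""
    X = rng.normal(loc=shift, scale=1.0, size=(n, 2))
    y = (X[:, 0] + X[:, 1] > 2 * shift).astype(int)
    return X, y

# Group A dominates the training data; group B is underrepresented.
Xa_train, ya_train = make_group(1900, shift=0.0)
Xb_train, yb_train = make_group(100, shift=3.0)

model = LogisticRegression().fit(
    np.vstack([Xa_train, Xb_train]),
    np.concatenate([ya_train, yb_train]),
)

# Evaluate on balanced test sets for each group.
Xa_test, ya_test = make_group(1000, shift=0.0)
Xb_test, yb_test = make_group(1000, shift=3.0)

print("Group A accuracy:", model.score(Xa_test, ya_test))
print("Group B accuracy:", model.score(Xb_test, yb_test))  # typically much lower
```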
Cloud Kicks learns of complaints from customers who are receiving too many sales calls and emails. Which data quality dimension should be assessed to reduce these communication inefficiencies?
A. Duplication
B. Usage
C. Consent
Explanation: Duplication is the data quality dimension to assess here. Duplication means the data contains multiple copies or instances of the same record or value, which causes confusion, errors, and waste in data analysis and processing. In this case, duplicate customer records lead directly to communication inefficiencies: the same customer receives multiple calls or emails from different sources for the same purpose.
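To illustrate why this dimension matters operationally, here is a short hedged pandas sketch; the column names and sample records are invented for the example. It normalizes email addresses and collapses duplicate contact rows so each customer appears only once in an outreach list.

```python
import pandas as pd

# Invented sample of contact records pulled from two campaign lists.
contacts = pd.DataFrame({
    "name":   ["Ada Lovelace", "Ada Lovelace", "Grace Hopper", "grace hopper"],
    "email":  ["ada@example.com", "ADA@example.com ", "grace@example.com", "grace@example.com"],
    "source": ["webinar", "trade_show", "webinar", "newsletter"],
})

# Normalize the matching key before comparing records.
contacts["email_key"] = contacts["email"].str.strip().str.lower()

# Keep one row per customer; every extra row is a potential duplicate call or email.
deduped = contacts.drop_duplicates(subset="email_key", keep="first")

print(f"{len(contacts) - len(deduped)} duplicate contact(s) removed")
print(deduped[["name", "email", "source"]])
```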
Cloud Kicks wants to ensure that multiple records for the same customer are removed in Salesforce. Which feature should be used to accomplish this?
A. Duplicate management
B. Trigger deletion of old records
C. Standardized field names
Explanation: Duplicate management should be used to remove multiple records for the same customer in Salesforce. It is a built-in feature that helps prevent and manage duplicate records: administrators define matching rules, duplicate rules, and alert messages to detect and merge duplicate records.