
Salesforce-AI-Specialist Practice Test



Universal Containers plans to implement prompt templates that utilize the standard foundation models.
What should the AI Specialist consider when building prompt templates in Prompt Builder?


A. Include multiple-choice questions within the prompt to test the LLM's understanding of the context.


B. Ask it to role-play as a character in the prompt template to provide more context to the LLM.


C. Train LLM with data using different writing styles including word choice, intensifiers, emojis, and punctuation.





C. Train LLM with data using different writing styles including word choice, intensifiers, emojis, and punctuation.

Explanation: When building prompt templates in Prompt Builder, it is essential to consider how the Large Language Model (LLM) processes and generates outputs. Training the LLM with various writing styles, such as different word choices, intensifiers, emojis, and punctuation, helps the model better understand diverse writing patterns and produce more contextually appropriate responses.
This approach enhances the flexibility and accuracy of the LLM when generating outputs for different use cases, as it is trained to recognize various writing conventions and styles. The prompt template should focus on providing rich context, and this stylistic variety helps improve the model's adaptability.
Options A and B are less relevant because adding multiple-choice questions or role-playing scenarios doesn't contribute significantly to improving the AI's output generation quality within standard business contexts.
For more details, refer to Salesforce's Prompt Builder documentation and LLM tuning strategies.
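
For illustration only, the sketch below shows the kind of context-rich prompt body the explanation refers to. The function name, record fields, and wording are hypothetical placeholders, not actual Prompt Builder merge-field syntax.

```python
# Illustrative sketch only: shows how a prompt body can ground the LLM with
# record context. Function name, fields, and wording are hypothetical
# placeholders, not Prompt Builder merge-field syntax.
def build_sales_email_prompt(contact_name: str, product: str, industry: str) -> str:
    return (
        "Write a short follow-up email on behalf of Universal Containers. "
        f"The recipient is {contact_name}, who works in the {industry} industry "
        f"and recently evaluated {product}. "
        "Keep the tone professional and reference their industry specifically."
    )

print(build_sales_email_prompt("Jane Doe", "Container X200", "logistics"))
```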

Universal Containers (UC) wants to create a new Sales Email prompt template in Prompt Builder using the "Save As" function. However, UC notices that the new template produces different results compared to the standard Sales Email prompt due to missing hyperparameters.
What should UC do to ensure the new prompt template produces results comparable to the standard Sales Email prompts?


A. Use Model Playground to create a model configuration with the specified parameters.


B. Manually add the hyperparameters to the new template.


C. Revert to using the standard template without modifications.





B. Manually add the hyperparameters to the new template.

Explanation: When Universal Containers creates a new Sales Email prompt template using the "Save As" function, missing hyperparameters can result in different outputs. To ensure the new prompt produces comparable results to the standard Sales Email prompt, the AI Specialist should manually add the necessary hyperparameters to the new template.
Hyperparameters like Temperature, Frequency Penalty, and Presence Penalty directly affect how the AI generates responses. Ensuring that these are consistent with the standard template will result in similar outputs.
Option A (Model Playground) is not necessary here, as it focuses on fine-tuning models, not adjusting templates directly.
Option C (reverting to the standard template) does not solve the issue of customizing the prompt template.
For more information, refer to the Prompt Builder documentation on configuring hyperparameters in custom templates.
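
As a rough sketch of what those settings look like, the snippet below groups typical generation hyperparameters into a plain dictionary. The parameter names follow common LLM conventions and the values are illustrative placeholders, not Salesforce defaults.

```python
# Illustrative only: typical LLM generation hyperparameters that a copied
# template would need to carry over; the values here are placeholders.
generation_config = {
    "temperature": 0.7,        # randomness of the output; lower = more deterministic
    "frequency_penalty": 0.2,  # discourages repeating the same tokens
    "presence_penalty": 0.1,   # encourages introducing new terms or topics
    "max_tokens": 500,         # upper bound on response length
}

# If a "Save As" copy silently drops these settings, the model falls back to
# different defaults, which explains the divergent results described above.
```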

When configuring a prompt template, an AI Specialist previews the results of the prompt template they've written. They see two distinct text outputs: Resolution and Response.
Which information does the Resolution text provide?


A. It shows the full text that is sent to the Trust Layer.


B. It shows the response from the LLM based on the sample record.


C. It shows which sensitive data is masked before it is sent to the LLM.





B. It shows the response from the LLM based on the sample record.

Explanation: When previewing a prompt template in Salesforce, the Resolution text provides the response from the LLM (Large Language Model) based on the data from a sample record. This output shows what the AI model generated in response to the prompt, giving the AI Specialist a chance to review and adjust the response before finalizing the template.
Option B is correct because Resolution displays the actual response generated by the LLM.
Option A refers to sending the text to the Trust Layer, but that is not what Resolution represents.
Option C relates to data masking, which is shown elsewhere, not under Resolution.

Universal Containers (UC) wants to offer personalized service experiences and reduce agent handling time with AI-generated email responses, grounded in the Knowledge base.
Which AI capability should UC use?


A. Einstein Email Replies


B. Einstein Service Replies for Email


C. Einstein Generative Service Replies for Email





B. Einstein Service Replies for Email

Explanation: For Universal Containers (UC) to offer personalized service experiences and reduce agent handling time using AI-generated responses grounded in the Knowledge base, the best solution is Einstein Service Replies for Email. This capability leverages AI to automatically generate responses to service-related emails based on historical data and the Knowledge base, ensuring accuracy and relevance while saving time for service agents.
Einstein Email Replies (option A) is more suited for sales use cases.
Einstein Generative Service Replies for Email (option C) could be a future offering, but as of now, Einstein Service Replies for Email is the correct choice for grounded, knowledge-based responses.

An AI Specialist is tasked with configuring a generative model to create personalized sales emails using customer data stored in Salesforce. The AI Specialist has already fine-tuned a large language model (LLM) on the OpenAI platform.
Security and data privacy are critical concerns for the client.
How should the AI Specialist integrate the custom LLM into Salesforce?


A. Create an application of the custom LLM and embed it in Sales Cloud via iFrame.


B. Add the fine-tuned LLM in Einstein Studio Model Builder.


C. Enable a model endpoint on OpenAI and make callouts to the model to generate emails.





B. Add the fine-tuned LLM in Einstein Studio Model Builder.

Explanation: Since security and data privacy are critical, the best option for the AI Specialist is to integrate the fine-tuned LLM (Large Language Model) into Salesforce by adding it to Einstein Studio Model Builder. Einstein Studio allows organizations to bring their own AI models (BYOM), ensuring the model is securely managed within Salesforce's environment while adhering to data privacy standards.
Option A (embedding via iFrame) is less secure and doesn't integrate deeply with Salesforce's data and security models.
Option C (making callouts to OpenAI) raises concerns about data privacy, as sensitive Salesforce data would be sent to an external system.
Einstein Studio provides the most secure and seamless way to integrate custom AI models while maintaining control over data privacy and compliance. More details can be found in Salesforce's Einstein Studio documentation on integrating external models.
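
To make the privacy trade-off in Option C concrete, the hedged sketch below shows a direct callout to an external model endpoint; the fine-tuned model ID and customer fields are placeholders. Note how the raw record data leaves Salesforce and travels to a third-party API, which is the exposure that bringing the model into Einstein Studio is meant to avoid.

```python
# Hedged sketch of Option C (a direct callout to an external model endpoint),
# shown only to illustrate the privacy trade-off: the customer record data
# below would leave Salesforce and travel to a third-party API.
# The model ID and record fields are illustrative placeholders.
import os
import requests

customer = {"name": "Jane Doe", "last_purchase": "Container X200"}  # CRM data

response = requests.post(
    "https://api.openai.com/v1/chat/completions",
    headers={"Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}"},
    json={
        "model": "ft:gpt-4o-mini:acme::example",  # placeholder fine-tuned model ID
        "messages": [
            {"role": "system", "content": "You write personalized sales emails."},
            {"role": "user", "content": f"Draft an email for {customer['name']} "
                                        f"who recently bought {customer['last_purchase']}."},
        ],
    },
    timeout=30,
)
print(response.json()["choices"][0]["message"]["content"])
```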

