MuleSoft-Integration-Architect-I Practice Test



What is required before an API implemented using the components of Anypoint Platform can be managed and governed (by applying API policies) on Anypoint Platform?


A. The API must be published to Anypoint Exchange and a corresponding API instance ID must be obtained from API Manager to be used in the API implementation


B. The API implementation source code must be committed to a source control management system (such as GitHub)


C. A RAML definition of the API must be created in API designer so it can then be published to Anypoint Exchange


D. The API must be shared with the potential developers through an API portal so API consumers can interact with the API





A.
  The API must be published to Anypoint Exchange and a corresponding API instance ID must be obtained from API Manager to be used in the API implementation

Explanation

The context of this question is managing and governing APIs, implemented as Mule applications, on Anypoint Platform.

Anypoint API Manager (API Manager) is a component of Anypoint Platform that enables you to manage, govern, and secure APIs. It leverages the runtime capabilities of API Gateway and Anypoint Service Mesh, both of which enforce policies, collect and track analytics data, manage proxies, provide encryption and authentication, and manage applications.

References:

https://docs.mulesoft.com/api-manager/2.x/getting-started-proxy

https://docs.mulesoft.com/api-manager/2.x/api-auto-discovery-new-concept
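To make the correct answer concrete: once the API specification is published to Anypoint Exchange and an API instance is created in API Manager, the instance's API ID is referenced from the Mule application through API Autodiscovery, which pairs the running implementation with API Manager so that policies can be applied and enforced. A minimal sketch of the autodiscovery element in a Mule 4 application (the property name and flow name are illustrative):

<!-- API Autodiscovery: apiId is the API instance ID obtained from API
     Manager; flowRef points to the flow that exposes the API -->
<api-gateway:autodiscovery apiId="${api.id}" flowRef="api-main-flow"/>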

A Mule application is running on a customer-hosted Mule runtime in an organization's network. The Mule application acts as a producer of asynchronous Mule events. Each Mule event must be broadcast to all interested external consumers outside the Mule application. The Mule events should be published in a way that is guaranteed in normal situations and also minimizes duplicate delivery in less frequent failure scenarios.

The organizational firewall is configured to only allow outbound traffic on ports 80 and 443. Some external event consumers are within the organizational network, while others are located outside the firewall.

What Anypoint Platform service is most idiomatic (used for its intended purpose) for publishing these Mule events to all external consumers while addressing the desired reliability goals?


A. CloudHub VM queues


B. Anypoint MQ


C. Anypoint Exchange


D. CloudHub Shared Load Balancer





B.
  Anypoint MQ

Explanation:

Anypoint MQ is MuleSoft's cloud-hosted messaging service and is reached over HTTPS (port 443), so both internal and external consumers can connect through the firewall's allowed outbound ports. Its message exchanges provide publish/subscribe fan-out, broadcasting each Mule event to every subscribed queue, and its acknowledgment model gives at-least-once delivery: the Anypoint MQ connector operations are set to publish or consume messages, or to accept (ACK) or not accept (NACK) a message. This guarantees delivery in normal situations while keeping duplicate delivery confined to less frequent failure scenarios.

Reference: https://docs.mulesoft.com/mq/
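As a sketch of the publishing side, assuming the Anypoint MQ connector (the exchange name, property placeholders, and credentials are illustrative):

<!-- Anypoint MQ connection; clientId/clientSecret come from a client app
     registered in Anypoint MQ (placeholder property values) -->
<anypoint-mq:config name="Anypoint_MQ_Config">
  <anypoint-mq:connection url="${anypoint.mq.url}"
      clientId="${anypoint.mq.clientId}"
      clientSecret="${anypoint.mq.clientSecret}"/>
</anypoint-mq:config>

<!-- Publish each Mule event to a message exchange; the exchange fans the
     event out to every queue bound to it, one per interested consumer -->
<anypoint-mq:publish config-ref="Anypoint_MQ_Config"
    destination="order-events-exchange"/>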

A Mule application is required to periodically process a large data set from a back-end database to Salesforce CRM, using a Batch Job scope configured to properly process the high rate of records. The application is deployed to two CloudHub workers with persistent queues disabled. What is the consequence if a worker crashes during record processing?


A. Remaining records will be processed by a new replacement worker


B. Remaining records be processed by second worker


C. Remaining records will be left unprocessed


D. All the records will be processed from scratch by the second worker leading to duplicate processing





D.
  All the records will be processed from scratch by the second worker leading to duplicate processing

Explanation:

When a Mule application uses batch job scope to process large datasets and is deployed on multiple CloudHub workers without persistence queues enabled, the following scenario occurs if a worker crashes:

Batch Job Scope: Batch jobs are designed to handle large datasets by splitting the work into records and processing them in parallel.

Non-Persistent Queues: When persistence is not enabled, the state of the batch processing is not stored persistently. This means that if a worker crashes, the state of the in-progress batch job is lost.

Worker Crash Consequence: Because the queues and state of the in-progress batch job exist only on the crashed worker, a replacement or second worker cannot resume the job where it stopped. When processing is triggered again, the entire data set is processed from scratch by the second worker.

This behavior can cause issues such as duplicate data in Salesforce CRM and inefficiencies in processing.
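For context, a batch flow of this kind typically looks like the following sketch (connector configurations omitted; names are illustrative). With persistent queues disabled, the record queues backing the batch job exist only on the worker that started the job instance:

<flow name="sync-db-to-salesforce">
  <!-- Poll the back-end database on a schedule -->
  <scheduler>
    <scheduling-strategy>
      <fixed-frequency frequency="1" timeUnit="HOURS"/>
    </scheduling-strategy>
  </scheduler>
  <db:select config-ref="Database_Config">
    <db:sql>SELECT * FROM orders WHERE synced = 0</db:sql>
  </db:select>

  <!-- The batch engine queues records for parallel processing; without
       persistent queues this state is lost if the worker crashes -->
  <batch:job jobName="ordersToSalesforce">
    <batch:process-records>
      <batch:step name="createInSalesforce">
        <salesforce:create config-ref="Salesforce_Config" type="Order__c"/>
      </batch:step>
    </batch:process-records>
  </batch:job>
</flow>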

References

MuleSoft Batch Processing

MuleSoft CloudHub Workers

A company is designing an integration Mule application to process orders by submitting them to a back-end system for offline processing. Each order will be received by the Mule application through an HTTPS POST and must be acknowledged immediately.

Once acknowledged, the order will be submitted to the back-end system. Orders that cannot be successfully submitted due to rejections from the back-end system will need to be processed manually (outside the Mule application).

The Mule application will be deployed to a customer-hosted runtime and will be able to use an existing ActiveMQ broker if needed. The ActiveMQ broker is located inside the organization's firewall. The back-end system has a track record of unreliability due to both minor network connectivity issues and longer outages.

Which combination of Mule application components and ActiveMQ queues is required to ensure automatic submission of orders to the back-end system while supporting, but minimizing, manual order processing?


A. One or more On Error scopes to assist calling the back-end system; an Until Successful scope containing VM components for long retries; a persistent dead-letter VM queue configured in CloudHub


B. An Until Successful scope to call the back-end system; one or more ActiveMQ long-retry queues; one or more ActiveMQ dead-letter queues for manual processing


C. One or more On Error scopes to assist calling the back-end system; one or more ActiveMQ long-retry queues; a persistent dead-letter Object Store configured in the CloudHub Object Store service


D. A Batch Job scope to call the back-end system; an Until Successful scope containing Object Store components for long retries; a dead-letter Object Store configured in the Mule application





B.
  An Until Successful scope to call the back-end system; one or more ActiveMQ long-retry queues; one or more ActiveMQ dead-letter queues for manual processing

Explanation:

To design an integration Mule application that processes orders and ensures reliability even with an unreliable back-end system, the following components and ActiveMQ queues should be used:

Until Successful Scope: This scope ensures that the Mule application will continue trying to submit the order to the back-end system until it succeeds or reaches a specified retry limit. This helps in handling transient network issues or minor outages of the back-end system.

ActiveMQ Long-Retry Queues: By placing the orders in long-retry queues, the application can manage retries over an extended period. This is particularly useful when the back-end system experiences longer outages. The ActiveMQ broker, located within the organization's firewall, can reliably handle these queues.

ActiveMQ Dead-Letter Queues: Orders that cannot be successfully submitted after all retry attempts should be moved to dead-letter queues. This allows for manual processing of these orders. The dead-letter queue ensures that no orders are lost and provides a clear mechanism for handling failed submissions.

Implementation Steps:

HTTP Listener: Set up an HTTP listener to receive incoming orders.

Immediate Acknowledgment: Immediately acknowledge the receipt of the order to the client.

Until Successful Scope: Use the Until Successful scope to attempt submitting the order to the back-end system. Configure retry intervals and limits.

Long-Retry Queues: Configure ActiveMQ long-retry queues to manage retries.

Dead-Letter Queues: Set up ActiveMQ dead-letter queues for orders that fail after maximum retry attempts, allowing for manual intervention.

This approach ensures that the system can handle temporary and prolonged back-end outages while minimizing manual processing.
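A minimal sketch of this design, assuming the Mule 4 JMS connector pointed at the existing ActiveMQ broker (queue names, retry settings, and the broker URL are illustrative; the long-retry behavior can also be tuned with ActiveMQ redelivery policies):

<jms:config name="JMS_Config">
  <jms:active-mq-connection>
    <jms:factory-configuration brokerUrl="tcp://activemq.internal:61616"/>
  </jms:active-mq-connection>
</jms:config>

<!-- Receive each order, queue it, and acknowledge immediately -->
<flow name="receive-order">
  <http:listener config-ref="HTTP_Listener_config" path="/orders"/>
  <jms:publish config-ref="JMS_Config" destination="orders.pending"/>
  <set-payload value='{"status": "accepted"}'/>
</flow>

<!-- Submit queued orders with long retries; exhausted retries go to a
     dead-letter queue for manual processing -->
<flow name="submit-order">
  <jms:listener config-ref="JMS_Config" destination="orders.pending"/>
  <until-successful maxRetries="10" millisBetweenRetries="60000">
    <http:request config-ref="Backend_HTTP_Config" method="POST" path="/orders"/>
  </until-successful>
  <error-handler>
    <on-error-continue type="MULE:RETRY_EXHAUSTED">
      <jms:publish config-ref="JMS_Config" destination="orders.dlq"/>
    </on-error-continue>
  </error-handler>
</flow>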

References:

MuleSoft Documentation on Until Successful Scope: https://docs.mulesoft.com/mule-runtime/4.3/until-successful-scope

ActiveMQ Documentation: https://activemq.apache.org/

A developer needs to enable DEBUG-level logging for only the org.apache.cxf package in a Mule application deployed to the CloudHub production environment. How should the developer update the logging configuration in order to enable this package-specific debugging?


A. In Anypoint Monitoring, define a logging search query with class property set to org.apache.cxf and level set to DEBUG


B. In the Mule application's log4j2.xml file, add an AsyncLogger element with name property set to org.apache.cxf and level set to DEBUG, then redeploy the Mule application in the CloudHub production environment


C. In the Mule application's log4j2.xml file, change the root logger's level property to DEBUG, then redeploy the Mule application to the CloudHub production environment


D. In Anypoint Runtime Manager, in the Deployed Application Properties tab for the Mule application, add a line item with DEBUG level for package org.apache.cxf and apply the changes





B.
  In the Mule application's log4j2.xml file, add an AsyncLogger element with name property set to org.apache.cxf and level set to DEBUG, then redeploy the Mule application in the CloudHub production environment

Explanation:

To enable package-specific debugging for the org.apache.cxf package, you need to update the logging configuration in the Mule application's log4j2.xml file. The steps are as follows:

Open the log4j2.xml file in your Mule application.

Add an AsyncLogger element with the name property set to org.apache.cxf and the level set to DEBUG. This configuration specifies that only the logs from the org.apache.cxf package should be logged at the DEBUG level.

Save the changes to the log4j2.xml file.

Redeploy the updated Mule application to the CloudHub production environment to apply the new logging configuration.

This approach ensures that only the specified package's logging level is changed to DEBUG, minimizing the potential performance impact on the application.
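The relevant fragment of the log4j2.xml Loggers section might look like this (the appender name and surrounding configuration are illustrative):

<Loggers>
  <!-- Package-specific logger: only org.apache.cxf is raised to DEBUG -->
  <AsyncLogger name="org.apache.cxf" level="DEBUG"/>

  <!-- Root logger stays at INFO so overall log volume is unaffected -->
  <AsyncRoot level="INFO">
    <AppenderRef ref="file"/>
  </AsyncRoot>
</Loggers>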

References

MuleSoft Documentation on Configuring Logging

Log4j2 Configuration Guide

