
MuleSoft-Integration-Architect-I Practice Test


Page 8 out of 54 Pages

A leading eCommerce giant will use MuleSoft APIs on Runtime Fabric (RTF) to process customer orders. Some customer-sensitive information, such as credit card information, is required in request payloads or included in response payloads of some of the APIs. Other APIs are not authorized to access some of this customer-sensitive information, but their implementations validate and transform payloads based on the structure and format of that information (such as account IDs, phone numbers, and postal codes).

What approach configures an API gateway to hide sensitive data exchanged between API consumers and API implementations, but can convert tokenized fields back to their original value for other API requests or responses, without having to recode the API implementations?

Later, the project team requires all API specifications to be augmented with an additional non-functional requirement (NFR) to protect the backend services from a high rate of requests, according to defined service-level agreements (SLAs). The NFR's SLAs are based on a new tiered subscription level ("Gold", "Silver", or "Platinum") that must be tied to a new parameter being added to the Accounts object in their enterprise data model.

Following MuleSoft's recommended best practices, how should the project team now convey the necessary non-functional requirement to stakeholders?


A. Create and deploy API proxies in API Manager for the NFR, change the baseurl in each API specification to the corresponding API proxy implementation endpoint, and publish each modified API specification to Exchange


B. Update each API specification with comments about the NFR's SLAs and publish each modified API specification to Exchange


C. Update each API specification with a shared RAML fragment required to implement the NFR and publish the RAML fragment and each modified API specification to Exchange


D. Create a shared RAML fragment required to implement the NFR, list each API implementation endpoint in the RAML fragment, and publish the RAML fragment to Exchange





C.
  Update each API specification with a shared RAML fragment required to implement the NFR and publish the RAML fragment and each modified API specification to Exchange

Explanation:

To convey the necessary non-functional requirement (NFR) related to protecting backend services from a high rate of requests according to SLAs, the following steps should be taken:

Create a Shared RAML Fragment: Develop a RAML fragment that defines the NFR, including the SLAs for different subscription levels ("Gold", "Silver", "Platinum"). This fragment should include the details on rate limiting and throttling based on the new parameter added to the Accounts object.

Update API Specifications: Integrate the shared RAML fragment into each API specification. This ensures that the NFR is consistently applied across all relevant APIs.

Publish to Exchange: Publish the updated API specifications and the shared RAML fragment to Anypoint Exchange. This makes the NFR visible and accessible to all stakeholders and developers, ensuring compliance and implementation consistency.

This approach ensures that the NFR is clearly communicated and applied uniformly across all API implementations.
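As a sketch of what such a shared fragment might look like, the following is a hypothetical RAML 1.0 trait (the fragment name, `client_id` header, and wording are illustrative assumptions; only the tier names come from the scenario):

```raml
#%RAML 1.0 Trait
# Hypothetical shared fragment, e.g. published to Exchange as "rate-limiting-sla".
description: |
  Requests to this API are rate limited according to the caller's SLA tier
  ("Gold", "Silver", or "Platinum"), derived from the new subscription-level
  parameter on the Accounts object in the enterprise data model.
headers:
  client_id:
    description: Client ID that API Manager uses to resolve the caller's SLA tier
    required: true
responses:
  429:
    description: The rate limit for the caller's SLA tier has been exceeded
```

Each API specification would then apply the trait to its resources (for example, `is: [rate-limiting-sla]`) so that the NFR travels with every specification published to Exchange.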


An insurance provider is implementing Anypoint Platform to manage its application infrastructure and, due to certain financial requirements it must meet, is using a customer-hosted runtime. It has built a number of synchronous APIs and currently hosts these on a single Mule runtime on one server.

These applications make heavy use of several components, including object stores and VM queues.

The business has grown rapidly in the last year, and the insurance provider is starting to receive reports of reliability issues from its applications.

The DevOps team indicates that the APIs are currently handling too many requests and this is overloading the server. The team has also mentioned that there is significant downtime when the server is down for maintenance.

As an integration architect, which option would you suggest to mitigate these issues?


A. Add a load balancer and add additional servers in a server group configuration


B. Add a load balancer and add additional servers in a cluster configuration


C. Increase physical specifications of server CPU memory and network


D. Change the applications to use an event-driven model





B.
  Add a load balancer and add additional servers in a cluster configuration

Explanation:

To address the reliability and scalability issues faced by the insurance provider, adding a load balancer and configuring additional servers in a cluster configuration is the optimal solution. Here's why:

Load Balancing: Implementing a load balancer will help distribute incoming API requests evenly across multiple servers. This prevents any single server from becoming a bottleneck, thereby improving the overall performance and reliability of the system.

Cluster Configuration: By setting up a cluster configuration, you ensure that multiple servers work together as a single high-availability unit. This provides several benefits:

Failover: Nodes in a cluster are aware of each other and synchronize state, so if one server fails, another takes over its in-flight work. A plain server group (option A) provides no such failover.

Maintenance: Servers can be taken offline for maintenance one at a time without affecting overall availability, as the load balancer redirects traffic to the remaining servers.

VM Queues and Object Stores: In a cluster, VM queues and in-memory object stores are shared across nodes through the cluster's distributed data grid, so this application's heavy use of both continues to work correctly while load is spread across multiple servers. In a server group, these resources remain local to each server.
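On customer-hosted runtimes, a cluster can be created from Runtime Manager or manually via a `mule-cluster.properties` file on each node. A minimal sketch for a two-node cluster might look like this (the cluster ID, node ID, and addresses are placeholders, not values from the scenario):

```properties
# {MULE_HOME}/.mule/mule-cluster.properties on node 1 (all values are placeholders)
mule.clusterId=insurance-ha-cluster
mule.clusterNodeId=1
# Unicast discovery of all cluster members (default Hazelcast port 5701)
mule.cluster.nodes=192.168.1.10:5701,192.168.1.11:5701
mule.cluster.multicastenabled=false
```

Node 2 would carry the same file with `mule.clusterNodeId=2`; the load balancer then distributes requests across both nodes.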

References:

MuleSoft Documentation on Clustering: https://docs.mulesoft.com/mule-runtime/4.3/clustering

Best Practices for Scaling Mule Applications: https://blogs.mulesoft.com/dev/mule-dev/mule-4-scaling-applications/

A Mule application uses APIkit for SOAP to implement a SOAP web service. The Mule application has been deployed to a CloudHub worker in a testing environment. The integration testing team wants to use a SOAP client to perform integration testing. To carry out the integration tests, the team must obtain the interface definition for the SOAP web service. What is the most idiomatic (used for its intended purpose) way for the integration testing team to obtain the interface definition for the deployed SOAP web service in order to perform integration testing with the SOAP client?


A. Retrieve the OpenAPI Specification file(s) from API Manager


B. Retrieve the WSDL file(s) from the deployed Mule application


C. Retrieve the RAML file(s) from the deployed Mule application


D. Retrieve the XML file(s) from Runtime Manager





B.
  Retrieve the WSDL file(s) from the deployed Mule application

Explanation:

APIkit for SOAP scaffolds the web service from a WSDL, and the deployed Mule application serves that WSDL as the service's interface definition; a SOAP client can import it directly, typically by appending ?wsdl to the service endpoint URL of the CloudHub worker. Serving as a machine-readable SOAP contract is exactly what a WSDL is for, making option B the idiomatic choice. An OpenAPI specification (A) and RAML (C) describe REST APIs, not SOAP services, and Runtime Manager (D) manages deployments; it does not publish service interface definitions.

A manufacturing company is planning to deploy Mule applications to its own Azure Kubernetes Service infrastructure.

The organization wants to make the Mule applications more available and robust by deploying each Mule application to an isolated Mule runtime in a Docker container while managing all the Mule applications from the MuleSoft-hosted control plane.

What is the most idiomatic (used for its intended purpose) choice of runtime plane to meet these organizational requirements?


A. Anypoint Platform Private Cloud Edition


B. Anypoint Runtime Fabric


C. CloudHub


D. Anypoint Service Mesh





B.
  Anypoint Runtime Fabric

Explanation:

Reference: https://blogs.mulesoft.com/dev-guides/how-to-tutorials/anypoint-runtime-fabric/

An organization wants to achieve its high-availability goal for Mule applications in a customer-hosted runtime plane. Due to the complexity involved, data cannot be shared among different instances of the same Mule application. Which option best suits this requirement, considering that high availability is critical to the organization?


A. The cluster can be configured


B. Use third party product to implement load balancer


C. High availability can be achieved only in CloudHub


D. Use persistent object store





B.
  Use third party product to implement load balancer

Explanation:

High availability is about the uptime of your application.

C) "High availability can be achieved only in CloudHub" is not a correct statement; it can be achieved in customer-hosted runtime planes as well.

D) An object store is a facility for storing objects in or across Mule applications. Mule runtime engine (Mule) uses object stores to persist data for eventual retrieval. It can help with disaster recovery, but not with high availability: using an object store cannot guarantee that all instances won't go down at once. So it is not an appropriate choice.

Reference: https://docs.mulesoft.com/mule-runtime/4.3/mule-object-stores

A) and B) High availability can be achieved by two models for on-premise MuleSoft implementations:

1) Mule clustering, where multiple Mule servers are available within the same cluster environment and requests are routed by the load balancer. A cluster is a set of up to eight servers that act as a single deployment target and high-availability processing unit. Application instances in a cluster are aware of each other, share common information, and synchronize statuses. If one server fails, another server takes over processing applications. A cluster can run multiple applications. In the given scenario, however, it is stated that data cannot be shared among different instances, so this is not a correct choice.

Reference: https://docs.mulesoft.com/runtime-manager/cluster-about

2) Load-balanced standalone Mule instances: high availability can be achieved even without a cluster by using a third-party load balancer that directs requests to different Mule servers. This approach does not share or synchronize data between Mule runtimes, and high availability is still achieved because load-balancing algorithms are implemented in the external load balancer. This matches the requirement, making option B correct.
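For the load-balanced standalone model, the third-party load balancer could be something as simple as the following hypothetical nginx configuration (hostnames and ports are placeholders, not values from the scenario):

```nginx
# Illustrative third-party load balancer in front of two standalone Mule runtimes
upstream mule_runtimes {
    # Two independent Mule servers; no state is shared between them
    server mule-node-1.example.internal:8081;
    server mule-node-2.example.internal:8081;
}

server {
    listen 80;
    location / {
        # Each request is routed to an available runtime; if one node is down,
        # nginx falls back to the other, giving high availability without clustering
        proxy_pass http://mule_runtimes;
    }
}
```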

