
MuleSoft-Platform-Architect-I Practice Test

Whether you're a beginner or brushing up on skills, our MuleSoft-Platform-Architect-I practice exam is your key to success. Our comprehensive question bank covers all key topics, ensuring you’re fully prepared.



What is a typical result of using a fine-grained rather than a coarse-grained API deployment model to implement a given business process?


A. A decrease in the number of connections within the application network supporting the business process


B. A higher number of discoverable API-related assets in the application network


C. A better response time for the end user as a result of the APIs being smaller in scope and complexity


D. An overall lower usage of resources because each fine-grained API consumes fewer resources





B.
  A higher number of discoverable API-related assets in the application network

Explanation

Correct Answer: A higher number of discoverable API-related assets in the application network.

*****************************************

>> We do NOT get faster response times with a fine-grained approach compared to a coarse-grained approach.

>> In fact, a network built on coarse-grained APIs gives faster response times than one built on fine-grained APIs. The reasons are below.

Fine-grained approach:

1. There are more APIs than in a coarse-grained model.

2. So more orchestration is needed to achieve a given piece of functionality in the business process.

3. That means many more API calls have to be made, so more connections must be established. This leads to more hops, more network I/O, and more integration points than in a coarse-grained approach, where fewer APIs embed bulk functionality.

4. Because of all these extra hops and the added latency, the fine-grained approach has somewhat higher response times than the coarse-grained approach.

5. Besides the added latency and connections, more resources are consumed in the fine-grained approach because there are more APIs to run.

That is why fine-grained APIs are good for exposing a larger number of reusable assets in your application network and making them discoverable. However, they need more maintenance and more care around integration points, connections, and resources, with a small compromise in network hops and response times.
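As an illustration of points 2 and 3 above, here is a rough Python sketch contrasting the two styles for a hypothetical "place order" business process. The host, endpoints, and payload fields are made up for illustration only; the point is that the fine-grained style forces the caller to orchestrate several network calls, while the coarse-grained style bundles the same functionality behind a single call.

```python
import requests  # any HTTP client would do; shown here for illustration

BASE = "https://api.example.internal"  # hypothetical host


def place_order_fine_grained(order: dict) -> dict:
    # Fine-grained: the caller orchestrates several small APIs,
    # so each order costs multiple hops, connections, and integration points.
    customer = requests.get(f"{BASE}/customers/{order['customerId']}").json()
    stock = requests.get(f"{BASE}/inventory/{order['sku']}").json()
    if stock["available"] < order["quantity"]:
        raise RuntimeError("insufficient stock")
    payment = requests.post(
        f"{BASE}/payments",
        json={"customerId": customer["id"], "amount": order["amount"]},
    ).json()
    return requests.post(
        f"{BASE}/orders",
        json={"order": order, "paymentId": payment["id"]},
    ).json()


def place_order_coarse_grained(order: dict) -> dict:
    # Coarse-grained: one API embeds the whole business function, so the
    # caller makes a single call (fewer hops, lower latency), at the cost
    # of fewer discoverable, reusable assets in the application network.
    return requests.post(f"{BASE}/order-processing", json=order).json()
```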

A REST API is being designed to implement a Mule application. What standard interface definition language can be used to define REST APIs?


A. Web Service Definition Language (WSDL)


B. OpenAPI Specification (OAS)


C. YAML


D. AsyncAPI Specification





B.
  OpenAPI Specification (OAS)
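For context, OAS (like RAML) is a standard interface definition language for REST APIs, whereas WSDL describes SOAP web services, AsyncAPI targets event-driven APIs, and YAML by itself is only a data format that OAS and RAML documents are commonly written in. The sketch below builds a minimal, hypothetical OAS 3.0 definition as a Python dictionary and prints it as JSON; real definitions are normally authored directly in YAML or JSON, for example in Anypoint Design Center.

```python
import json

# Minimal, hypothetical OpenAPI 3.0 definition for a single resource.
# The top-level keys (openapi, info, paths) follow the OAS 3.0 structure.
order_api_oas = {
    "openapi": "3.0.0",
    "info": {"title": "Order API", "version": "1.0.0"},
    "paths": {
        "/orders/{orderId}": {
            "get": {
                "summary": "Retrieve a single order",
                "parameters": [
                    {
                        "name": "orderId",
                        "in": "path",
                        "required": True,
                        "schema": {"type": "string"},
                    }
                ],
                "responses": {"200": {"description": "The requested order"}},
            }
        }
    },
}

print(json.dumps(order_api_oas, indent=2))
```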

A retail company is using an Order API to accept new orders. The Order API uses a JMS queue to submit orders to a backend order management service. The normal load for orders is being handled using two (2) CloudHub workers, each configured with 0.2 vCore. The CPU load of each CloudHub worker normally runs well below 70%. However, several times during the year the Order API gets four times (4x) the average number of orders. This causes the CloudHub worker CPU load to exceed 90% and the order submission time to exceed 30 seconds. The cause, however, is NOT the backend order management service, which still responds fast enough to meet the response SLA for the Order API. What is the MOST resource-efficient way to configure the Mule application's CloudHub deployment to help the company cope with this performance challenge?


A. Permanently increase the size of each of the two (2) CloudHub workers by at least four times (4x) to one (1) vCore


B. Use a vertical CloudHub autoscaling policy that triggers on CPU utilization greater than 70%


C. Permanently increase the number of CloudHub workers by four times (4x) to eight (8) CloudHub workers


D. Use a horizontal CloudHub autoscaling policy that triggers on CPU utilization greater than 70%





D.
  Use a horizontal CloudHub autoscaling policy that triggers on CPU utilization greater than 70%

Explanation

Correct Answer: Use a horizontal CloudHub autoscaling policy that triggers on CPU utilization greater than 70%

The scenario clearly states that the usual traffic during the year is handled well by the existing worker configuration, with CPU running well below 70%. The problem occurs only occasionally, when there is a spike in the number of incoming orders.

Based on this, we need neither a permanent increase in the size of each worker nor a permanent increase in the number of workers. That would be wasteful, because outside those occasional spikes the extra resources would sit idle.

That leaves two options: a horizontal CloudHub autoscaling policy that automatically increases the number of workers, or a vertical CloudHub autoscaling policy that automatically increases the vCore size of each worker.

Here, we need to take two things into consideration:

1. CPU

2. Order Submission Rate to JMS Queue

>> From a CPU perspective, both options (horizontal and vertical scaling) solve the issue; either brings the utilization back down below 90%.

>> However, with vertical scaling, the application is still load balanced across only two workers, so from an order submission rate perspective there may be little improvement in the incoming request processing rate or the rate of order submission to the JMS queue. Throughput stays roughly the same; only CPU utilization comes down.

>> With horizontal scaling, new workers are spawned and added to the load balancer, increasing throughput as more workers share the load. This addresses both the CPU utilization and the order submission rate.

Hence, a horizontal CloudHub autoscaling policy is the right answer.
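Conceptually, a horizontal autoscaling policy watches average CPU across the workers and adds workers (up to a configured maximum) while the threshold is exceeded, then scales back in when load drops. The Python sketch below only illustrates that decision rule; the actual policy is configured in Anypoint Runtime Manager, and the thresholds and worker limits shown here are hypothetical values chosen to match the scenario.

```python
from dataclasses import dataclass


@dataclass
class HorizontalScalePolicy:
    # Hypothetical policy values mirroring the scenario in the question.
    scale_out_cpu_pct: float = 70.0  # add a worker above this average CPU
    scale_in_cpu_pct: float = 30.0   # remove a worker below this average CPU
    min_workers: int = 2             # normal load is handled by two workers
    max_workers: int = 8             # cap sized for the ~4x seasonal peak

    def next_worker_count(self, current_workers: int, avg_cpu_pct: float) -> int:
        """Return the worker count the policy would move toward."""
        if avg_cpu_pct > self.scale_out_cpu_pct:
            return min(current_workers + 1, self.max_workers)
        if avg_cpu_pct < self.scale_in_cpu_pct:
            return max(current_workers - 1, self.min_workers)
        return current_workers


policy = HorizontalScalePolicy()
print(policy.next_worker_count(current_workers=2, avg_cpu_pct=92.0))  # 3 -> scale out
print(policy.next_worker_count(current_workers=3, avg_cpu_pct=20.0))  # 2 -> scale back in
```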

A company wants to move its Mule API implementations into production as quickly as possible. To protect access to all Mule application data and metadata, the company requires that all Mule applications be deployed to the company's customer-hosted infrastructure within the corporate firewall. What combination of runtime plane and control plane options meets these project lifecycle goals?


A. Manually provisioned customer-hosted runtime plane and customer-hosted control plane


B. MuleSoft-hosted runtime plane and customer-hosted control plane


C. Manually provisioned customer-hosted runtime plane and MuleSoft-hosted control plane


D. iPaaS provisioned customer-hosted runtime plane and MuleSoft-hosted control plane





A.
  Manually provisioned customer-hosted runtime plane and customer-hosted control plane

Explanation

Correct Answer: Manually provisioned customer-hosted runtime plane and customer-hosted control plane

*****************************************

There are two key factors to take into consideration from the scenario given in the question.

>> The company requires both data and metadata to reside within the corporate firewall.

>> The company wants to use customer-hosted infrastructure.

Any deployment model that deals with the cloud directly or indirectly (MuleSoft-hosted, or the customer's own cloud such as Azure or AWS) has to share at least the metadata.

Application data can be kept inside the firewall by running the Mule runtimes on a customer-hosted runtime plane. But with a MuleSoft-hosted or cloud-based control plane, at least some minimum amount of metadata must be sent outside the corporate firewall.

As the requirement is clear that both data and metadata must stay within the corporate firewall, the company, even though it wants to move to production as quickly as possible, has no option but to go with a manually provisioned customer-hosted runtime plane and a customer-hosted control plane.
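The Python sketch below restates that reasoning as a simple check of each answer option against the two requirements. The mapping of options to where data and metadata end up follows the explanation above and is only illustrative.

```python
# Which of the four deployment combinations keeps BOTH application data
# and metadata inside the corporate firewall? (Illustrative mapping only.)
OPTIONS = {
    "A: customer-hosted runtime plane + customer-hosted control plane": {
        "data_stays_inside": True,      # Mule runtimes run behind the firewall
        "metadata_stays_inside": True,  # control plane also runs behind the firewall
    },
    "B: MuleSoft-hosted runtime plane + customer-hosted control plane": {
        "data_stays_inside": False,     # application data is processed in the MuleSoft cloud
        "metadata_stays_inside": True,
    },
    "C: customer-hosted runtime plane + MuleSoft-hosted control plane": {
        "data_stays_inside": True,
        "metadata_stays_inside": False,  # some metadata goes to the cloud control plane
    },
    "D: iPaaS-provisioned customer-hosted runtime plane + MuleSoft-hosted control plane": {
        "data_stays_inside": True,
        "metadata_stays_inside": False,
    },
}

for option, checks in OPTIONS.items():
    verdict = "PASS" if all(checks.values()) else "FAIL"
    print(f"{verdict}  {option}")
```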

An API has been updated in Anypoint Exchange by its API producer from version 3.1.1 to 3.2.0 following accepted semantic versioning practices and the changes have been communicated via the API's public portal. The API endpoint does NOT change in the new version. How should the developer of an API client respond to this change?


A. The update should be identified as a project risk and full regression testing of the functionality that uses this API should be run


B. The API producer should be contacted to understand the change to existing functionality


C. The API producer should be requested to run the old version in parallel with the new one


D. The API client code ONLY needs to be changed if it needs to take advantage of new features





D.
  The API client code ONLY needs to be changed if it needs to take advantage of new features

Reference: https://docs.mulesoft.com/exchange/to-change-raml-version
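Under semantic versioning, a minor-version bump such as 3.1.1 to 3.2.0 adds functionality in a backward-compatible way, so existing client code keeps working unchanged and only needs updating if it wants to use the new features. The hypothetical helper below sketches that rule: only a change in the major version signals a potentially breaking change for clients.

```python
def client_change_required(current: str, new: str) -> bool:
    """Return True only when the new API version may break existing clients.

    Per semantic versioning (MAJOR.MINOR.PATCH), minor and patch bumps are
    backward compatible; only a MAJOR bump can introduce breaking changes.
    """
    current_major = int(current.split(".")[0])
    new_major = int(new.split(".")[0])
    return new_major != current_major


print(client_change_required("3.1.1", "3.2.0"))  # False: no client change needed
print(client_change_required("3.2.0", "4.0.0"))  # True: review for breaking changes
```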

