
MuleSoft-Platform-Architect-I Practice Test


Page 3 out of 20 Pages

An organization has several APIs that accept JSON data over HTTP POST. The APIs are all publicly available and are associated with several mobile applications and web applications. The organization does NOT want to use any authentication or compliance policies for these APIs, but at the same time, is worried that some bad actor could send payloads that could somehow compromise the applications or servers running the API implementations. What out-of-the-box Anypoint Platform policy can address exposure to this threat?


A. Shut out bad actors by using HTTPS mutual authentication for all API invocations


B. Apply an IP blacklist policy to all APIs; the blacklist will include all bad actors


C. Apply a Header injection and removal policy that detects the malicious data before it is used


D. Apply a JSON threat protection policy to all APIs to detect potential threat vectors





D.
  Apply a JSON threat protection policy to all APIs to detect potential threat vectors

Explanation

Correct Answer: Apply a JSON threat protection policy to all APIs to detect potential threat vectors

*****************************************

Usually, if APIs are designed and developed for specific, known consumers/customers, we would IP-whitelist those consumers to ensure that traffic only comes from them.

However, as this scenario states that the APIs are publicly available and used by many mobile and web applications, it is NOT possible to identify and blacklist every potential bad actor.

So, the JSON threat protection policy is the best option to prevent malicious JSON payloads from such bad actors.
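To make the idea concrete, the sketch below shows the kinds of structural limits such a policy enforces before a payload ever reaches the API implementation. This is an illustrative Python sketch, not Anypoint's actual implementation; the limit names and thresholds are hypothetical (the real policy is configured in API Manager).

```python
# Illustrative sketch (NOT Anypoint's implementation) of the structural
# limits a JSON threat protection policy enforces on incoming payloads.
# Limit names and values below are hypothetical examples.
import json

MAX_DEPTH = 5            # maximum container nesting depth
MAX_STRING_LENGTH = 100  # maximum length of any string value or key
MAX_ARRAY_ELEMENTS = 50  # maximum number of elements per array
MAX_OBJECT_ENTRIES = 50  # maximum number of entries per object

def check(node, depth=0):
    """Recursively verify one parsed JSON node against the limits."""
    if depth > MAX_DEPTH:
        raise ValueError("maximum nesting depth exceeded")
    if isinstance(node, str):
        if len(node) > MAX_STRING_LENGTH:
            raise ValueError("string value too long")
    elif isinstance(node, list):
        if len(node) > MAX_ARRAY_ELEMENTS:
            raise ValueError("too many array elements")
        for item in node:
            check(item, depth + 1)
    elif isinstance(node, dict):
        if len(node) > MAX_OBJECT_ENTRIES:
            raise ValueError("too many object entries")
        for key, value in node.items():
            if len(key) > MAX_STRING_LENGTH:
                raise ValueError("property name too long")
            check(value, depth + 1)

def validate_payload(raw_body: str) -> bool:
    """Return True only if the payload parses and passes all limits."""
    try:
        check(json.loads(raw_body))
        return True
    except ValueError:  # includes json.JSONDecodeError
        return False
```

A payload with excessive nesting or oversized values is rejected at the gateway, so the API implementation never has to process it.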

A set of tests must be performed prior to deploying API implementations to a staging environment. Due to data security and access restrictions, untested APIs cannot be granted access to the backend systems, so instead mocked data must be used for these tests. The amount and content of the available mocked data are sufficient to entirely test the API implementations with no active connections to the backend systems. What type of tests should be used to incorporate this mocked data?


A. Integration tests


B. Performance tests


C. Functional tests (Blackbox)


D. Unit tests (Whitebox)





D.
  Unit tests (Whitebox)

Explanation

Correct Answer: Unit tests (Whitebox)

*****************************************

Reference: https://docs.mulesoft.com/mule-runtime/3.9/testing-strategies

As per general IT testing practice and MuleSoft recommended practice, Integration and Performance tests should be done on a full end-to-end setup for a proper evaluation, meaning all end systems should be connected while running the tests. So, those options are OUT, and we are left with Unit Tests and Functional Tests.

As per the referenced MuleSoft documentation:

Unit Tests - are limited to the code that can be realistically exercised without the need to run it inside Mule itself. Good candidates are small pieces of modular code: sub-flows, custom transformers, custom components, custom expression evaluators, etc.

Functional Tests - are those that most extensively exercise your application configuration. In these tests, you have the freedom and tools for simulating happy and unhappy paths. You can also create stubs for target services and make them succeed or fail to easily simulate happy and unhappy paths, respectively.

As the scenario demands that the API implementations be tested before deployment to Staging, and clearly indicates that there is sufficient mocked data to test the various components of the API implementations with no active connections to the backend systems, Unit Tests are the ones to be used to incorporate this mocked data.
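The whitebox principle described above can be sketched in plain Python: the backend connector is replaced by a mock that returns canned data, so the implementation logic is exercised with no live backend connection. (In a real Mule project this would be done with MUnit; `OrderService` and `fetch_orders` here are hypothetical names for illustration.)

```python
# Illustrative sketch of whitebox unit testing with mocked backend data.
# The real backend client is never contacted; a Mock returns canned
# payloads instead. Names (OrderService, fetch_orders) are hypothetical.
import unittest
from unittest.mock import Mock

class OrderService:
    """A small piece of implementation logic under test."""
    def __init__(self, backend):
        self.backend = backend

    def total_for_customer(self, customer_id):
        # In production, fetch_orders would call the backend system.
        orders = self.backend.fetch_orders(customer_id)
        return sum(o["amount"] for o in orders)

class OrderServiceTest(unittest.TestCase):
    def test_total_uses_mocked_data_only(self):
        backend = Mock()  # stands in for the real backend client
        backend.fetch_orders.return_value = [
            {"amount": 10.0},
            {"amount": 32.5},   # mocked data, no live connection
        ]
        service = OrderService(backend)
        self.assertEqual(service.total_for_customer("c-1"), 42.5)
        backend.fetch_orders.assert_called_once_with("c-1")
```

Because the mock fully specifies the backend's responses, the test is repeatable and can run in any environment where the backend is unreachable, which is exactly the constraint in the question.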

A company has started to create an application network and is now planning to implement a Center for Enablement (C4E) organizational model. What key factor would lead the company to decide upon a federated rather than a centralized C4E?


A. When there are a large number of existing common assets shared by development teams


B. When various teams responsible for creating APIs are new to integration and hence need extensive training


C. When development is already organized into several independent initiatives or groups


D. When the majority of the applications in the application network are cloud based





C.
  When development is already organized into several independent initiatives or groups

Explanation

Correct Answer: When development is already organized into several independent initiatives or groups

*****************************************

It would require a lot of process effort in an organization for a single C4E team to coordinate with multiple already-organized development teams working on several independent initiatives. A single C4E works well when the different teams share at least a common initiative. So, in this scenario, a federated C4E works better than a centralized C4E.

What is a best practice when building System APIs?


A. Document the API using an easily consumable asset like a RAML definition


B. Model all API resources and methods to closely mimic the operations of the backend system


C. Build an Enterprise Data Model (Canonical Data Model) for each backend system and apply it to System APIs


D. Expose to API clients all technical details of the API implementation's interaction with the backend system





B.
  Model all API resources and methods to closely mimic the operations of the backend system

Explanation

Correct Answer: Model all API resources and methods to closely mimic the operations of the backend system.

*****************************************

There are NO fixed best practices when choosing data models for APIs. They are completely contextual and depend on a number of factors. Based on those factors, an enterprise can choose whether to go with an Enterprise Canonical Data Model, a Bounded Context Model, etc.

One should NEVER expose the technical details of the API implementation to API clients. Only the API interface/RAML is exposed to API clients.

It is true that the RAML definitions of APIs should be as detailed as possible and should reflect most of the documentation. However, that alone is NOT enough to call an API well documented. There should be further documentation on Anypoint Exchange, with API Notebooks etc., to create a developer-friendly API and repository.

The best practice when creating System APIs is always to create their API interfaces by modeling their resources and methods to closely reflect the operations and functionality of the backend system.

A company wants to move its Mule API implementations into production as quickly as possible. To protect access to all Mule application data and metadata, the company requires that all Mule applications be deployed to the company's customer-hosted infrastructure within the corporate firewall. What combination of runtime plane and control plane options meets these project lifecycle goals?


A. Manually provisioned customer-hosted runtime plane and customer-hosted control plane


B. MuleSoft-hosted runtime plane and customer-hosted control plane


C. Manually provisioned customer-hosted runtime plane and MuleSoft-hosted control plane


D. iPaaS provisioned customer-hosted runtime plane and MuleSoft-hosted control plane





A.
  Manually provisioned customer-hosted runtime plane and customer-hosted control plane

Explanation

Correct Answer: Manually provisioned customer-hosted runtime plane and customer-hosted control plane

*****************************************

There are two key factors to take into consideration from the scenario given in the question:

The company requires both data and metadata to reside within the corporate firewall.

The company would like to go with customer-hosted infrastructure.

Any deployment model that deals with the cloud directly or indirectly (MuleSoft-hosted, or the customer's own cloud like Azure or AWS) will have to share at least the metadata.

Application data can be kept inside the firewall by running Mule runtimes on a customer-hosted runtime plane. But with a MuleSoft-hosted/cloud-based control plane, at least some minimum level of metadata must be sent outside the corporate firewall. As the customer requirement clearly states that both data and metadata must stay within the corporate firewall, then even though the customer wants to move to production as quickly as possible, the nature of their security requirements leaves them no option but a manually provisioned customer-hosted runtime plane and a customer-hosted control plane.

