An organization is deploying their new implementation of the OrderStatus System API to multiple workers in CloudHub. This API fronts the organization's on-premises Order Management System, which is accessed by the API implementation over an IPsec tunnel. What type of error typically does NOT result in a service outage of the OrderStatus System API?
A. A CloudHub worker fails with an out-of-memory exception
B. API Manager has an extended outage during the initial deployment of the API implementation
C. The AWS region goes offline with a major network failure to the relevant AWS data centers
D. The Order Management System is inaccessible due to a network outage in the organization's on-premises data center
Explanation
Correct Answer: A CloudHub worker fails with an out-of-memory exception.
*****************************************
>> An AWS region going down will definitely result in an outage: no matter how many workers are assigned to the Mule app, all of them run in that region and will go down with it. This is complete downtime and an outage.
>> An extended outage of API Manager during the initial deployment of the API implementation will cause issues during application startup itself: API autodiscovery might fail, or API policy templates and policies might not be downloaded and embedded at application startup, among other possible problems.
>> A network outage in the organization's on-premises data center would make the Order Management System inaccessible, and no matter how many workers are assigned to the app, all of them would fail to reach it, causing an outage.
The only option that does NOT result in a service outage is a CloudHub worker failing with an out-of-memory exception. Even if one worker fails and goes down, the remaining workers continue to handle requests and keep the API up and running. So, this is the right answer.
A system API has a guaranteed SLA of 100 ms per request. The system API is deployed to a primary environment as well as to a disaster recovery (DR) environment, with different DNS names in each environment. An upstream process API invokes the system API and the main goal of this process API is to respond to client requests in the least possible time. In what order should the system APIs be invoked, and what changes should be made in order to speed up the response time for requests from the process API?
A. In parallel, invoke the system API deployed to the primary environment and the system API deployed to the DR environment, and ONLY use the first response
B. In parallel, invoke the system API deployed to the primary environment and the system API deployed to the DR environment using a scatter-gather configured with a timeout, and then merge the responses
C. Invoke the system API deployed to the primary environment, and if it fails, invoke the system API deployed to the DR environment
D. Invoke ONLY the system API deployed to the primary environment, and add timeout and retry logic to avoid intermittent failures
Explanation
Correct Answer: In parallel, invoke the system API deployed to the primary environment
and the system API deployed to the DR environment, and ONLY use the first response.
*****************************************
>> The API requirement in the given scenario is to respond in the least possible time.
>> The option suggesting to invoke the API in the primary environment first and fall back to the API in the DR environment if it fails would produce a successful response, but NOT in the least possible time. So, this is NOT the right implementation choice for the given requirement.
>> The option suggesting to invoke ONLY the API in the primary environment, with timeout and retry logic added, may also produce a successful response after retries, but again NOT in the least possible time. So, this is also NOT the right implementation choice for the given requirement.
>> The option suggesting to invoke the API in the primary environment and the API in the DR environment in parallel using a Scatter-Gather would produce the wrong API response, because a Scatter-Gather returns the merged results of its routes. Moreover, although Scatter-Gather does execute its routes in parallel, it completes its scope only after ALL routes have finished, so it cannot return as soon as the first response arrives. So again, this is NOT the right implementation choice for the given requirement.
The correct choice is to invoke the API in the primary environment and the API in the DR environment in parallel, and use ONLY the first response received from either of them.
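As a sketch of this "first response wins" pattern, here is a minimal Python example using asyncio. The URLs, the fetch stub, and the simulated latencies are hypothetical placeholders for the primary and DR deployments of the system API:

```python
# Minimal sketch of "first response wins": invoke both deployments in
# parallel and return whichever answers first, cancelling the other.
# URLs and simulated latencies are hypothetical placeholders.

import asyncio

async def call_system_api(url: str, latency: float) -> str:
    await asyncio.sleep(latency)        # stand-in for the real HTTP call
    return f"response from {url}"

async def fastest_response() -> str:
    tasks = [
        asyncio.create_task(call_system_api("https://orders-primary.example.com", 0.08)),
        asyncio.create_task(call_system_api("https://orders-dr.example.com", 0.05)),
    ]
    done, pending = await asyncio.wait(tasks, return_when=asyncio.FIRST_COMPLETED)
    for task in pending:
        task.cancel()                   # discard the slower call
    return done.pop().result()

print(asyncio.run(fastest_response()))
```

The key design choice is returning on FIRST_COMPLETED and discarding the slower call, which is exactly what a merging Scatter-Gather does not do.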
An Anypoint Platform organization has been configured with an external identity provider (IdP) for identity management and client management. What credentials or token must be provided to Anypoint CLI to execute commands against the Anypoint Platform APIs?
A. The credentials provided by the IdP for identity management
B. The credentials provided by the IdP for client management
C. An OAuth 2.0 token generated using the credentials provided by the IdP for client management
D. An OAuth 2.0 token generated using the credentials provided by the IdP for identity management
Explanation
Correct Answer: The credentials provided by the IdP for identity management
*****************************************
Reference: https://docs.mulesoft.com/runtime-manager/anypoint-platform-cli#authentication
>> Anypoint CLI does not support OAuth 2.0 tokens from client or identity providers for authentication. The only supported tokens are bearer tokens, and those can only be generated using an Anypoint organization/environment client ID and secret via https://anypoint.mulesoft.com/accounts/login, not using the client credentials of the client provider. So, OAuth 2.0 is not an option. Moreover, such a token is mainly intended for API Manager purposes and is not associated with a user, so it can NOT be used to call most platform APIs (for example, the CloudHub APIs), as noted in the relevant MuleSoft Knowledge article.
>> The other authentication option Anypoint CLI allows is client credentials. It is possible to use the client credentials of a client provider, but that requires setting up Connected Apps in client management, and no such details are given in the scenario described in the question.
>> So, the only option left is to use the user credentials from the identity provider.
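As a rough sketch of how user credentials are exchanged for a bearer token at the login endpoint mentioned above, consider the following. The payload and response field names ("username", "password", "access_token") are assumptions about the login API; verify them against the current Anypoint documentation:

```python
# Hedged sketch: exchange IdP user credentials for an Anypoint bearer
# token via the login endpoint referenced above. Field names are
# assumptions; check the current Anypoint Platform docs before use.

import requests

def get_bearer_token(username: str, password: str) -> str:
    resp = requests.post(
        "https://anypoint.mulesoft.com/accounts/login",
        json={"username": username, "password": password},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()["access_token"]

# The resulting token would be sent as "Authorization: Bearer <token>"
# on subsequent Anypoint Platform API calls.
```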
A company has started to create an application network and is now planning to implement a Center for Enablement (C4E) organizational model. What key factor would lead the company to decide upon a federated rather than a centralized C4E?
A. When there are a large number of existing common assets shared by development teams
B. When various teams responsible for creating APIs are new to integration and hence need extensive training
C. When development is already organized into several independent initiatives or groups
D. When the majority of the applications in the application network are cloud based
Explanation
Correct Answer: When development is already organized into several independent
initiatives or groups
*****************************************
>> It would require a lot of process effort for a single C4E team to coordinate with multiple development teams that are already organized into several independent initiatives. A single centralized C4E works well when the different teams share at least a common initiative. So, in this scenario, a federated C4E works better than a centralized C4E.
Due to a limitation in the backend system, a system API can only handle up to 500 requests per second. What is the best type of API policy to apply to the system API to avoid overloading the backend system?
A. Rate limiting
B. HTTP caching
C. Rate limiting - SLA based
D. Spike control
Explanation
Correct Answer: Spike control
*****************************************
>> First things first: the HTTP Caching policy serves purposes other than protecting the backend system from overload. So this one is OUT.
>> Rate Limiting and Throttling/Spike Control policies are both designed to limit API access, but with different intentions.
>> Rate Limiting protects an API by applying a hard limit on its access: requests beyond the limit are rejected outright.
>> Throttling/Spike Control shapes API access by smoothing spikes in traffic: excess requests are queued and delayed rather than immediately rejected.
That is why Spike Control is the right option.
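The contrast can be sketched in a few lines of Python. The window size and the 500-request limit come from the scenario, but the functions are illustrative stand-ins, not the actual Mule policy implementations:

```python
# Illustrative sketch (not the actual Mule policies):
# - rate limiting rejects any request beyond the limit in the window
# - spike control delays (queues) excess requests to smooth traffic

import time

LIMIT = 500          # max requests per window, per the backend constraint
WINDOW = 1.0         # window length in seconds

def rate_limit(count_in_window: int) -> str:
    """Hard limit: reject everything over the quota."""
    return "accepted" if count_in_window < LIMIT else "rejected (429)"

def spike_control(count_in_window: int) -> str:
    """Traffic shaping: delay excess requests into the next window."""
    if count_in_window < LIMIT:
        return "forwarded to backend"
    time.sleep(WINDOW)              # wait for capacity instead of rejecting
    return "forwarded after delay"

print(rate_limit(750))      # -> rejected (429)
print(spike_control(750))   # -> forwarded after delay
```

Because the goal is to avoid overloading the backend rather than to enforce a contractual quota, the smoothing behavior is the better fit.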