To implement predictive maintenance on its machinery, ACME Tractors has installed thousands of IoT sensors that send data for each machinery asset as sequences of JMS messages, in near real time, to a JMS queue named SENSOR_DATA on a JMS server. The Mule application contains a JMS Listener operation configured to receive incoming messages from the JMS server's SENSOR_DATA queue. The Mule application persists each received JMS message, then sends a transformed version of the corresponding Mule event to the machinery back-end systems.
The Mule application will be deployed to a multi-node, customer-hosted Mule runtime cluster. Under normal conditions, each JMS message should be processed exactly once.
How should the JMS Listener be configured to maximize performance and concurrent message processing of the JMS queue?
A. Set numberOfConsumers = 1
Set primaryNodeOnly = false
B. Set numberOfConsumers = 1
Set primaryNodeOnly = true
C. Set numberOfConsumers to a value greater than one
Set primaryNodeOnly = true
D. Set numberOfConsumers to a value greater than one
Set primaryNodeOnly = false
Explanation:
Reference: https://docs.mulesoft.com/jms-connector/1.8/jms-performance
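To maximize concurrency across the cluster, the listener needs multiple consumers per node and must be active on every node. A minimal sketch of that configuration; the connection config name and consumer count are assumptions:

```xml
<!-- Listener runs on every cluster node (primaryNodeOnly="false"),
     with multiple consumers per node for concurrent processing -->
<jms:listener config-ref="JMS_Config"
              destination="SENSOR_DATA"
              numberOfConsumers="4"
              primaryNodeOnly="false"/>
```

Because SENSOR_DATA is a queue, the broker still delivers each message to only one consumer, preserving the exactly-once processing expectation under normal conditions.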
An organization has implemented a cluster of two customer-hosted Mule runtimes that hosts an application. This application has a flow with a JMS listener configured to consume messages from a queue destination. As an integration architect, which JMS listener configuration would you advise so that messages are received on all nodes of the cluster?
A. Use the parameter primaryNodeOnly="false" on the JMS listener
B. Use the parameter primaryNodeOnly="false" on the JMS listener with a shared subscription
C. Use the parameter primaryNodeOnly="true" on the JMS listener with a non-shared subscription
D. Use the parameter primaryNodeOnly="true" on the JMS listener
Explanation:
In a clustered Mule runtime environment, when using a JMS listener to consume messages from a queue destination, it is essential to ensure that messages are appropriately received by all nodes in the cluster. The configuration must support high availability and scalability. Here's why option B is correct:
primaryNodeOnly="false": Setting this parameter to "false" ensures that the JMS listener is active on all nodes in the cluster, not just the primary node. This setting allows multiple instances of the JMS listener to run concurrently across different nodes, enabling them to consume messages from the JMS queue.
Shared Subscription: Using a shared subscription means that all nodes will share the consumption of messages from the queue. This approach prevents duplicate message processing, as each message is delivered to only one listener instance within the cluster. This configuration ensures that message processing is balanced across the nodes, improving throughput and reliability.
To configure the JMS listener in Mule, the XML configuration might look something like this:
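A minimal sketch, assuming a JMS connection config named JMS_Config; the destination and subscription names are chosen for illustration:

```xml
<!-- Active on all nodes; a shared subscription spreads messages across them -->
<jms:listener config-ref="JMS_Config" destination="myDestination" primaryNodeOnly="false">
    <jms:consumer-type>
        <jms:topic-consumer shared="true" subscriptionName="cluster-subscription"/>
    </jms:consumer-type>
</jms:listener>
```

Note that shared subscriptions apply to topic consumers in the JMS connector; for a plain queue destination, the broker already delivers each message to only one consumer.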
This setup ensures that all nodes in the cluster are involved in message processing, leveraging the high availability and load balancing capabilities of the cluster.
References
MuleSoft Documentation on JMS Listener
MuleSoft Clustering Guide
A Mule application is synchronizing customer data between two different database systems. What is the main benefit of using an XA transaction over local transactions to synchronize these two database systems?
A. Reduce latency
B. Increase throughput
C. Simplifies communication
D. Ensure consistency
Explanation:
* XA transactions add significant latency, so "Reduce latency" is an incorrect option. XA transactions implement an "all or nothing" two-phase commit protocol across the participating resources.
* Each local XA resource manager supports the ACID properties (Atomicity, Consistency, Isolation, and Durability).
So the correct choice is "Ensure consistency".
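As an illustration, a Mule 4 Try scope can open an XA transaction spanning both database writes; the config names, table, and parameters are assumptions:

```xml
<flow name="syncCustomerData">
    <!-- transactionType="XA" enlists both database connections in a
         single two-phase-commit transaction: both inserts commit, or neither does -->
    <try transactionalAction="ALWAYS_BEGIN" transactionType="XA">
        <db:insert config-ref="Database_A_Config">
            <db:sql>INSERT INTO customers (id, name) VALUES (:id, :name)</db:sql>
            <db:input-parameters>#[{id: payload.id, name: payload.name}]</db:input-parameters>
        </db:insert>
        <db:insert config-ref="Database_B_Config">
            <db:sql>INSERT INTO customers (id, name) VALUES (:id, :name)</db:sql>
            <db:input-parameters>#[{id: payload.id, name: payload.name}]</db:input-parameters>
        </db:insert>
    </try>
</flow>
```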
Reference: https://docs.mulesoft.com/mule-runtime/4.3/xa-transactions
A Mule application is designed to fulfill two requirements:
a) Process files synchronously from an FTPS server to a back-end database, using intermediary VM queues to load balance VM events
b) Process a medium rate of records from a source system to a target system using a Batch Job scope
Considering the processing reliability requirements for the FTPS files, how should VM queues be configured for file processing, as well as for the Batch Job scope, if the application is deployed to CloudHub workers?
A. Use CloudHub persistent queues for FTPS file processing. There is no need to configure VM queues for the Batch Job scope, as by default it uses the worker's disk for VM queueing
B. Use CloudHub persistent VM queues for FTPS file processing. There is no need to configure VM queues for the Batch Job scope, as by default it uses the worker's JVM memory for VM queueing
C. Use CloudHub persistent VM queues for FTPS file processing. Disable the VM queue for the Batch Job scope
D. Use VM connector persistent queues for FTPS file processing. Disable the VM queue for the Batch Job scope
Explanation:
When processing files synchronously from an FTPS server to a back-end database using VM intermediary queues for load balancing VM events on CloudHub, reliability is critical. CloudHub persistent queues should be used for FTPS file processing to ensure that no data is lost in case of worker failure or restarts. These queues provide durability and reliability since they store messages persistently.
For the batch job scope, it is not necessary to configure additional VM queues. By default, batch jobs on CloudHub use the worker's disk for VM queueing, which is reliable for handling medium-rate records processing from a source to a target system. This approach ensures that both FTPS file processing and batch job processing meet reliability requirements without additional configuration for batch job scope.
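On the VM connector side, queue persistence is declared per queue. A sketch, with the config and queue names chosen for illustration (on CloudHub, persistent queues must also be enabled for the application in Runtime Manager):

```xml
<vm:config name="VM_Config">
    <vm:queues>
        <!-- PERSISTENT queues survive worker failures and restarts -->
        <vm:queue queueName="ftpsFilesQueue" queueType="PERSISTENT"/>
    </vm:queues>
</vm:config>

<flow name="processFtpsFile">
    <vm:listener config-ref="VM_Config" queueName="ftpsFilesQueue"/>
    <!-- persist the file record to the back-end database here -->
</flow>
```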
References
MuleSoft Documentation on CloudHub and VM Queues
Anypoint Platform Best Practices
A payment processing company has implemented a Payment Processing API Mule application to process credit card and debit card transactions. Because the Payment Processing API handles highly sensitive information, the payment processing company requires that data must be encrypted both in transit and at rest.
To meet these security requirements, consumers of the Payment Processing API must create request message payloads in a JSON format specified by the API, and the message payload values must be encrypted.
How can the Payment Processing API validate requests received from API consumers?
A. A Transport Layer Security (TLS) - Inbound policy can be applied in API Manager to decrypt the message payload and the Mule application implementation can then use the JSON Validation module to validate the JSON data
B. The Mule application implementation can use the APIkit module to decrypt and then validate the JSON data
C. The Mule application implementation can use the Validation module to decrypt and then validate the JSON data
D. The Mule application implementation can use DataWeave to decrypt the message payload and then use the JSON Schema Validation module to validate the JSON data
Explanation:
To ensure that data is encrypted both in-transit and at-rest, and to validate incoming requests to the Payment Processing API, the following approach is recommended:
TLS Inbound Policy: Apply a Transport Layer Security (TLS) - Inbound policy in API Manager. This policy ensures that the data is encrypted during transmission and can be decrypted by the API Manager before it reaches the Mule application.
Decryption: With the TLS policy applied, the message payload is decrypted when it is received by the API Manager.
JSON Validation: After decryption, the Mule application can use the JSON Validation module to validate the structure and content of the JSON data. This ensures that the payload conforms to the specified format and contains valid data.
This approach ensures that data is securely transmitted and properly validated upon receipt.
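After decryption, the validation step in the Mule application might look like the following sketch; the schema path is an assumption:

```xml
<!-- Raises JSON:SCHEMA_NOT_HONOURED if the payload does not conform to the schema -->
<json:validate-schema schema="schemas/payment-request-schema.json"/>
```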
References:
Transport Layer Security (TLS) Policies
JSON Validation Module