Professional-Cloud-Developer Practice Test


Page 5 out of 51 Pages

Topic 2: Misc. Questions

You are a lead developer working on a new retail system that runs on Cloud Run and Firestore. A web UI requirement is for the user to be able to browse through all products. A few months after go-live, you notice that Cloud Run instances are terminated with HTTP 500: Container instances are exceeding memory limits errors during busy times.
This error coincides with spikes in the number of Firestore queries.
You need to prevent Cloud Run from crashing and decrease the number of Firestore queries. You want a solution that optimizes system performance. What should you do?


A. Create a custom index over the products


B. Modify the query that returns the product list using cursors with limits


C. Modify the Cloud Run configuration to increase the memory limits


D. Modify the query that returns the product list using integer offsets





Answer: B. Modify the query that returns the product list using cursors with limits.
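Cursors win over integer offsets (option D) because an offset query still reads, and bills for, every skipped document, while a cursor resumes directly after the last document seen, so each page costs only `limit` reads and holds only `limit` documents in memory. The following is a pure-Python sketch of that access pattern, not the Firestore client API; the document list and page sizes are illustrative.

```python
# Pure-Python sketch of why cursor pagination (answer B) beats integer
# offsets: offsets scan and discard skipped documents, cursors do not.
# This simulates the access pattern; it is NOT the Firestore client API.

def offset_page(docs, offset, limit):
    """Offset pagination: the backend reads `offset + limit` documents."""
    reads = min(offset + limit, len(docs))       # documents read (and billed)
    return docs[offset:offset + limit], reads

def cursor_page(docs, start_after, limit):
    """Cursor pagination: resume after a known key; only `limit` reads."""
    start = 0 if start_after is None else docs.index(start_after) + 1
    page = docs[start:start + limit]
    return page, len(page)

products = [f"product-{i:04d}" for i in range(1000)]

# Fetching page 50 (20 items per page) with an offset reads 1000 docs...
_, offset_reads = offset_page(products, offset=980, limit=20)
# ...while a cursor remembering the last doc of page 49 reads only 20.
page, cursor_reads = cursor_page(products, start_after="product-0979", limit=20)

print(offset_reads, cursor_reads)   # 1000 20
```

In the real Firestore API the same idea is expressed with `order_by(...).start_after(last_doc).limit(n)`, which bounds both query cost and the memory each Cloud Run instance must hold.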

You are running an application on App Engine that you inherited. You want to find out whether the application is using insecure binaries or is vulnerable to XSS attacks. Which service should you use?


A. Cloud Armor


B. Stackdriver Debugger


C. Cloud Security Scanner


D. Stackdriver Error Reporting





Answer: C. Cloud Security Scanner

Your team is building an application for a financial institution. The application's frontend runs on Compute Engine, and the data resides in Cloud SQL and one Cloud Storage bucket. The application will collect data containing PII, which will be stored in the Cloud SQL database and the Cloud Storage bucket. You need to secure the PII data. What should you do?


A. 1) Create the relevant firewall rules to allow only the frontend to communicate with the Cloud SQL database
2) Using IAM, allow only the frontend service account to access the Cloud Storage bucket


B. 1) Create the relevant firewall rules to allow only the frontend to communicate with the Cloud SQL database
2) Enable private access to allow the frontend to access the Cloud Storage bucket privately


C. 1) Configure a private IP address for Cloud SQL
2) Use VPC-SC to create a service perimeter
3) Add the Cloud SQL database and the Cloud Storage bucket to the same service perimeter


D. 1) Configure a private IP address for Cloud SQL
2) Use VPC-SC to create a service perimeter
3) Add the Cloud SQL database and the Cloud Storage bucket to different service perimeters





Answer: C.
1) Configure a private IP address for Cloud SQL
2) Use VPC-SC to create a service perimeter
3) Add the Cloud SQL database and the Cloud Storage bucket to the same service perimeter

You are responsible for deploying a new API. That API will have three different URL paths:
• https://yourcompany.com/students
• https://yourcompany.com/teachers
• https://yourcompany.com/classes
You need to configure each API URL path to invoke a different function in your code. What should you do?


A. Create one Cloud Function as a backend service exposed using an HTTPS load balancer.


B. Create three Cloud Functions exposed directly


C. Create one Cloud Function exposed directly


D. Create three Cloud Functions as three backend services exposed using an HTTPS load balancer.





Answer: D. Create three Cloud Functions as three backend services exposed using an HTTPS load balancer.
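Path-based routing is a load-balancer feature, not a Cloud Functions feature, which is why a single directly exposed function (options B and C) cannot dispatch on the URL path by configuration alone. With an HTTPS load balancer, each function sits behind its own serverless backend service and a URL map routes each path to it. The following URL map is a hedged sketch; the service and matcher names are placeholders.

```yaml
# Sketch of an HTTPS load balancer URL map routing each API path to a
# separate backend service, each backed by one Cloud Function (answer D).
# All names are placeholders.
name: api-url-map
defaultService: backendServices/students-backend
hostRules:
  - hosts: ["yourcompany.com"]
    pathMatcher: api-paths
pathMatchers:
  - name: api-paths
    defaultService: backendServices/students-backend
    pathRules:
      - paths: ["/students"]
        service: backendServices/students-backend   # students function
      - paths: ["/teachers"]
        service: backendServices/teachers-backend   # teachers function
      - paths: ["/classes"]
        service: backendServices/classes-backend    # classes function
```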

Your team develops services that run on Google Cloud. You want to process messages sent to a Pub/Sub topic, and then store them. Each message must be processed exactly once to avoid duplication of data and any data conflicts. You need the cheapest and simplest solution. What should you do?


A. Process the messages with a Dataproc job, and write the output to storage.


B. Process the messages with a Dataflow streaming pipeline using Apache Beam's PubSubIO package, and write the output to storage.


C. Process the messages with a Cloud Function, and write the results to a BigQuery location where you can run a job to deduplicate the data.


D. Retrieve the messages with a Dataflow streaming pipeline, store them in Cloud Bigtable, and use another Dataflow streaming pipeline to deduplicate messages.





Answer: B. Process the messages with a Dataflow streaming pipeline using Apache Beam's PubSubIO package, and write the output to storage.
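Dataflow is the right fit because Pub/Sub itself only guarantees at-least-once delivery; Beam's Pub/Sub source deduplicates redeliveries by message ID inside the pipeline, so downstream stages see each message effectively once without the manual dedup jobs of options C and D. The following pure-Python sketch illustrates that dedup-by-message-ID idea; it is not the Apache Beam API, and the message IDs are invented.

```python
# Illustration of effectively-exactly-once processing via deduplication
# on the Pub/Sub message ID, the idea Beam's Pub/Sub source implements.
# This is a conceptual sketch, not the Apache Beam API.

def process_exactly_once(messages, seen_ids, store):
    """Apply each message's side effect once, keyed by its message ID."""
    for msg_id, payload in messages:
        if msg_id in seen_ids:      # redelivery: Pub/Sub is at-least-once
            continue
        seen_ids.add(msg_id)
        store.append(payload)       # side effect happens once per ID

store, seen = [], set()
# "m2" is redelivered, as Pub/Sub's at-least-once contract allows.
process_exactly_once([("m1", "a"), ("m2", "b"), ("m2", "b"), ("m3", "c")],
                     seen, store)
print(store)   # ['a', 'b', 'c']
```

In a real pipeline the windowing, state, and ID tracking are managed by Dataflow, which is what makes option B both simpler and cheaper than running a second deduplication pipeline.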
