
Professional-Cloud-DevOps-Engineer Practice Test


Page 4 out of 15 Pages

You support an application that stores product information in cached memory. For every
cache miss, an entry is logged in Stackdriver Logging. You want to visualize how often a
cache miss happens over time. What should you do?


A.

Link Stackdriver Logging as a source in Google Data Studio. Filter the logs on the cache misses.


B.

Configure Stackdriver Profiler to identify and visualize when the cache misses occur based on the logs.


C.

Create a logs-based metric in Stackdriver Logging and a dashboard for that metric in Stackdriver Monitoring.


D.

Configure BigQuery as a sink for Stackdriver Logging. Create a scheduled query to filter
the cache miss logs and write them to a separate table.





Answer: C.
  

Create a logs-based metric in Stackdriver Logging and a dashboard for that metric in Stackdriver Monitoring.



Explanation: https://cloud.google.com/logging/docs/logs-based-metrics#counter-metric
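As a minimal sketch of answer C using the gcloud CLI (the metric name `cache_miss_count` and the log filter text are illustrative assumptions; adjust the filter to match your actual cache-miss log entries):

```shell
# Create a counter logs-based metric that counts cache-miss log entries.
# The filter string is an assumed example, not taken from the question.
gcloud logging metrics create cache_miss_count \
  --description="Count of cache-miss log entries" \
  --log-filter='textPayload:"cache miss"'
```

Once the logs-based metric exists, you can chart it over time on a Stackdriver Monitoring (now Cloud Monitoring) dashboard, which is exactly the visualization the question asks for.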

Your team is designing a new application for deployment into Google Kubernetes Engine (GKE). You need to set up monitoring to collect and aggregate various application-level metrics in a centralized location. You want to use Google Cloud Platform services while minimizing the amount of work required to set up monitoring. What should you do?


A.

Publish various metrics from the application directly to the Stackdriver Monitoring API,
and then observe these custom metrics in Stackdriver.


B.

Install the Cloud Pub/Sub client libraries, push various metrics from the application to
various topics, and then observe the aggregated metrics in Stackdriver.


C.

Install the OpenTelemetry client libraries in the application, configure Stackdriver as the
export destination for the metrics, and then observe the application's metrics in Stackdriver.


D.

Emit all metrics in the form of application-specific log messages, pass these messages
from the containers to the Stackdriver logging collector, and then observe metrics in
Stackdriver.





Answer: A.
  

Publish various metrics from the application directly to the Stackdriver Monitoring API,
and then observe these custom metrics in Stackdriver.



Explanation: https://cloud.google.com/kubernetes-engine/docs/concepts/custom-and-external-metrics#custom_metrics
https://github.com/GoogleCloudPlatform/k8s-stackdriver/blob/master/custom-metrics-stackdriver-adapter/README.md
Your application can report a custom metric to Cloud Monitoring. You can configure
Kubernetes to respond to these metrics and scale your workload automatically. For
example, you can scale your application based on metrics such as queries per second,
writes per second, network performance, latency when communicating with a different
application, or other metrics that make sense for your workload.
https://cloud.google.com/kubernetes-engine/docs/concepts/custom-and-external-metrics
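As a hedged illustration of scaling on such a custom metric (the metric name `queries_per_second`, target value, and workload names are assumptions, and this presumes the Custom Metrics Stackdriver Adapter linked above is installed in the cluster):

```yaml
# Sketch of a HorizontalPodAutoscaler driven by a custom metric
# that the application exports to Stackdriver Monitoring.
apiVersion: autoscaling/v2beta2
kind: HorizontalPodAutoscaler
metadata:
  name: app-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: app
  minReplicas: 1
  maxReplicas: 5
  metrics:
  - type: Pods
    pods:
      metric:
        name: queries_per_second   # assumed custom metric name
      target:
        type: AverageValue
        averageValue: "100"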

You manage an application that is writing logs to Stackdriver Logging. You need to give some team members the ability to export logs. What should you do?


A.

Grant the team members the IAM role of logging.configWriter on Cloud IAM.


B.

Configure Access Context Manager to allow only these members to export logs.


C.

Create and grant a custom IAM role with the permissions logging.sinks.list and
logging.sinks.get.


D.

Create an Organizational Policy in Cloud IAM to allow only these members to create log
exports.





Answer: A.
  

Grant the team members the IAM role of logging.configWriter on Cloud IAM.
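A sketch of the grant from answer A, using the gcloud CLI (the project ID and member address are placeholder assumptions):

```shell
# Grant a team member the Logs Configuration Writer role, which
# includes the permissions needed to create log exports (sinks).
gcloud projects add-iam-policy-binding PROJECT_ID \
  --member="user:alice@example.com" \
  --role="roles/logging.configWriter"
```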



You are running an application on Compute Engine and collecting logs through Stackdriver.
You discover that some personally identifiable information (PII) is leaking into certain log
entry fields. All PII entries begin with the text userinfo. You want to capture these log entries
in a secure location for later review and prevent them from leaking to Stackdriver Logging.
What should you do?


A.

Create a basic log filter matching userinfo, and then configure a log export in the Stackdriver console with Cloud Storage as a sink.


B.

Use a Fluentd filter plugin with the Stackdriver Agent to remove log entries containing userinfo, and then copy the entries to a Cloud Storage bucket.


C.

Create an advanced log filter matching userinfo, configure a log export in the Stackdriver console with Cloud Storage as a sink, and then configure a log exclusion with userinfo as a
filter.


D.

Use a Fluentd filter plugin with the Stackdriver Agent to remove log entries containing
userinfo, create an advanced log filter matching userinfo, and then configure a log export in
the Stackdriver console with Cloud Storage as a sink.





Answer: B.
  

Use a Fluentd filter plugin with the Stackdriver Agent to remove log entries containing userinfo, and then copy the entries to a Cloud Storage bucket.



Explanation: https://medium.com/google-cloud/fluentd-filter-plugin-for-google-cloud-data-loss-prevention-api-42bbb1308e76
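A sketch of the Fluentd half of answer B, using the built-in grep filter plugin to keep userinfo entries out of Stackdriver Logging (the `message` key is an assumption about the shape of your log records; copying the matched entries to a Cloud Storage bucket would be a separate output step not shown here):

```
<filter **>
  @type grep
  <exclude>
    key message
    pattern /^userinfo/
  </exclude>
</filter>
```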

You are working with a government agency that requires you to archive application logs for
seven years. You need to configure Stackdriver to export and store the logs while
minimizing costs of storage. What should you do?


A.

Create a Cloud Storage bucket and develop your application to send logs directly to the bucket.


B.

Develop an App Engine application that pulls the logs from Stackdriver and saves them
in BigQuery.


C.

Create an export in Stackdriver and configure Cloud Pub/Sub to store logs in permanent
storage for seven years.


D.

Create a sink in Stackdriver, name it, create a bucket on Cloud Storage for storing
archived logs, and then select the bucket as the log export destination.





Answer: D.
  

Create a sink in Stackdriver, name it, create a bucket on Cloud Storage for storing
archived logs, and then select the bucket as the log export destination.
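A minimal sketch of answer D with the gcloud CLI (the bucket name, sink name, and log filter are placeholder assumptions; a cold storage class such as Archive is one way to keep seven-year storage costs low):

```shell
# Create a low-cost bucket for archived logs (Archive storage class).
gsutil mb -c archive gs://my-archived-logs

# Create the Stackdriver sink with the bucket as the export destination.
gcloud logging sinks create archive-sink \
  storage.googleapis.com/my-archived-logs \
  --log-filter='resource.type="gae_app"'
```

Note that after creating the sink, its writer identity must be granted permission to write objects into the bucket before exports will flow.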



