You need to create a GKE cluster in an existing VPC that is accessible from on-premises.
You must meet the following requirements:
IP ranges for pods and services must be as small as possible.
The nodes and the master must not be reachable from the internet.
You must be able to use kubectl commands from on-premises subnets to manage
the cluster.
How should you create the GKE cluster?
A.
• Create a private cluster that uses VPC advanced routes.
• Set the pod and service ranges as /24.
• Set up a network proxy to access the master.
B.
• Create a VPC-native GKE cluster using GKE-managed IP ranges.
• Set the pod IP range as /21 and the service IP range as /24.
• Set up a network proxy to access the master.
C.
• Create a VPC-native GKE cluster using user-managed IP ranges.
• Enable a GKE cluster network policy, and set the pod and service ranges as /24.
• Set up a network proxy to access the master.
• Enable master authorized networks.
D.
• Create a VPC-native GKE cluster using user-managed IP ranges.
• Enable privateEndpoint on the cluster master.
• Set the pod and service ranges as /24.
• Set up a network proxy to access the master.
• Enable master authorized networks.
• Create a VPC-native GKE cluster using user-managed IP ranges.
• Enable privateEndpoint on the cluster master.
• Set the pod and service ranges as /24.
• Set up a network proxy to access the master.
• Enable master authorized networks.
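The "as small as possible" requirement can be sanity-checked with a little subnet arithmetic. The sketch below assumes GKE's documented defaults for VPC-native clusters: each node is assigned a /24 slice of the pod range (sized for the default maximum of 110 pods per node), and every address in the service range is a usable ClusterIP.

```python
import ipaddress

# With the default maximum of 110 pods per node, GKE carves a /24 out of
# the pod range for each node (256 addresses, more than 2 x 110, which
# leaves headroom for pod IP reuse during churn).
PER_NODE_POD_PREFIX = 24

def max_nodes(pod_cidr: str, per_node_prefix: int = PER_NODE_POD_PREFIX) -> int:
    """Number of nodes a pod CIDR supports at the given per-node prefix."""
    net = ipaddress.ip_network(pod_cidr)
    return 2 ** (per_node_prefix - net.prefixlen)

def max_services(service_cidr: str) -> int:
    """Number of ClusterIP addresses a service CIDR provides."""
    return ipaddress.ip_network(service_cidr).num_addresses

# A /21 pod range supports only 8 nodes at the default per-node /24,
# and a /24 pod range supports a single node unless the per-node block
# is shrunk by lowering the maximum pods per node.
print(max_nodes("10.0.0.0/21"))     # -> 8
print(max_services("10.4.0.0/24"))  # -> 256
```

The CIDR values are illustrative; the point is that "small" pod ranges directly cap node count unless the per-node pod block is also reduced.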
Creating GKE private clusters with network proxies for controller access:
When you create a GKE private cluster with a private cluster controller endpoint, the cluster's controller node is inaccessible from the public internet, but it still needs to be accessible for administration. By default, clients can access the controller through its private endpoint, and authorized networks can be defined within the VPC network. Accessing the controller from on-premises or from another VPC network, however, requires additional steps. This is because the VPC network that hosts the controller is owned by Google and cannot be reached from resources connected through a VPC network peering connection, Cloud VPN, or Cloud Interconnect.
https://cloud.google.com/solutions/creating-kubernetes-engine-private-clusters-with-net-proxies
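The pattern in the linked solution runs a small HTTP proxy on a VM inside the cluster's VPC; on-premises kubectl then reaches the private master endpoint by tunnelling through it via the `HTTPS_PROXY` environment variable. A minimal wrapper sketch (the proxy address and port are placeholders, not values from the solution):

```python
import os
import subprocess

def proxy_env(proxy_url: str) -> dict:
    """Environment for kubectl so API-server traffic goes via the proxy VM."""
    return dict(os.environ, HTTPS_PROXY=proxy_url)

def kubectl_via_proxy(args, proxy_url="http://10.128.0.5:8118"):
    """Run kubectl against the private master through the in-VPC proxy.

    proxy_url is a placeholder: substitute the internal address of the
    proxy instance deployed per the linked solution.
    """
    return subprocess.run(["kubectl", *args],
                          env=proxy_env(proxy_url), check=True)

# Example (requires kubectl and network reachability to the proxy VM):
# kubectl_via_proxy(["get", "nodes"])
```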
You have configured a service on Google Cloud that connects to an on-premises service
via a Dedicated Interconnect. Users are reporting recent connectivity issues. You need to
determine whether the traffic is being dropped because of firewall rules or a routing
decision. What should you do?
A.
Use the Network Intelligence Center Connectivity Tests to test the connectivity between
the VPC and the on-premises network.
B.
Use Network Intelligence Center Network Topology to check the traffic flow, and replay
the traffic from the time period when the connectivity issue occurred.
C.
Configure VPC Flow Logs. Review the logs by filtering on the source and destination.
D.
Configure a Compute Engine instance on the same VPC as the service running on
Google Cloud to run a traceroute targeted at the on-premises service.
Use Network Intelligence Center Network Topology to check the traffic flow, and replay
the traffic from the time period when the connectivity issue occurred.
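For context on the firewall-versus-routing distinction: a Network Intelligence Center Connectivity Test returns a trace of steps, and when a packet is dropped, the drop step carries a cause that identifies whether a firewall rule or a route was responsible. The classifier below works on a simplified trace; the step and cause fields are illustrative stand-ins for the richer messages the networkmanagement.googleapis.com API actually returns.

```python
def classify_drop(trace_steps):
    """Report whether a (simplified) Connectivity Tests trace shows a
    packet dropped by a firewall rule, a routing decision, or neither.

    Each step is a dict with an illustrative 'state' and, for drops,
    a 'cause' string; real traces use structured Step/DropInfo messages.
    """
    for step in trace_steps:
        if step.get("state") == "DROP":
            cause = step.get("cause", "")
            if "FIREWALL" in cause:
                return "firewall"
            if "ROUTE" in cause:
                return "routing"
            return "other"
    return "delivered"

trace = [
    {"state": "APPLY_EGRESS_FIREWALL_RULE"},
    {"state": "DROP", "cause": "ROUTE_BLACKHOLE"},
]
print(classify_drop(trace))  # -> routing
```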
You built a web application with several containerized microservices. You want to run those
microservices on Cloud Run. You must also ensure that the services are highly available to
your customers with low latency. What should you do?
A.
Deploy the Cloud Run services to multiple availability zones. Create a global TCP load
balancer. Add the Cloud Run endpoints to its backend service.
B.
Deploy the Cloud Run services to multiple regions. Create serverless network endpoint
groups (NEGs) that point to the services. Create a global HTTPS load balancer, and attach
the serverless NEGs as backend services of the load balancer.
C.
Deploy the Cloud Run services to multiple availability zones. Create Cloud Endpoints
that point to the services. Create a global HTTPS load balancer, and attach the Cloud
Endpoints to its backend.
D.
Deploy the Cloud Run services to multiple regions. Configure a round-robin A record in
Cloud DNS.
Deploy the Cloud Run services to multiple regions. Create serverless network endpoint
groups (NEGs) that point to the services. Create a global HTTPS load balancer, and attach
the serverless NEGs as backend services of the load balancer.
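The reason a global HTTPS load balancer with serverless NEGs beats round-robin DNS (option D) is that the load balancer advertises a single anycast IP and steers each client to the nearest healthy regional backend, while DNS round robin ignores both proximity and health. A toy model of that per-client decision (region names and latencies are illustrative; real routing uses anycast and live health checks):

```python
def pick_backend(latency_ms: dict, healthy: set) -> str:
    """Choose the lowest-latency healthy region, as a global HTTPS load
    balancer with serverless NEG backends effectively does per client."""
    candidates = {r: ms for r, ms in latency_ms.items() if r in healthy}
    if not candidates:
        raise RuntimeError("no healthy backends")
    return min(candidates, key=candidates.get)

# Latencies as seen by one hypothetical client in the US.
latency = {"us-central1": 30, "europe-west1": 110, "asia-east1": 190}

print(pick_backend(latency, {"us-central1", "europe-west1", "asia-east1"}))
# If the nearest region becomes unhealthy, traffic fails over automatically:
print(pick_backend(latency, {"europe-west1", "asia-east1"}))
```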
You converted an auto mode VPC network to custom mode. Since the conversion, some of
your Cloud Deployment Manager templates are no longer working. You want to resolve the problem.
What should you do?
A.
Apply an additional IAM role to the Google APIs service account to allow custom mode
networks.
B.
Update the VPC firewall to allow the Cloud Deployment Manager to access the custom
mode networks.
C.
Explicitly reference the custom mode networks in the Cloud Armor whitelist.
D.
Explicitly reference the custom mode networks in the Deployment Manager templates.
Explicitly reference the custom mode networks in the Deployment Manager templates.
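The failure mode here is that templates written for an auto mode VPC could rely on auto-created subnets (one per region, with well-known names); after conversion to custom mode, instance resources must name the network and subnetwork explicitly. A hedged sketch of a Deployment Manager Python template doing so (the project, region, network, and subnet names are illustrative):

```python
def GenerateConfig(context):
    """Deployment Manager Python template that references a custom mode
    network and subnetwork explicitly instead of relying on the subnets
    an auto mode VPC would have created."""
    project = context.env["project"]
    region = "us-central1"           # illustrative
    network = "custom-vpc"           # illustrative custom mode network
    subnet = "custom-subnet-usc1"    # illustrative subnet in that network

    return {
        "resources": [{
            "name": "app-vm",
            "type": "compute.v1.instance",
            "properties": {
                "zone": f"{region}-a",
                "machineType": f"zones/{region}-a/machineTypes/e2-small",
                "disks": [{
                    "boot": True,
                    "autoDelete": True,
                    "initializeParams": {
                        "sourceImage":
                            "projects/debian-cloud/global/images/family/debian-12",
                    },
                }],
                "networkInterfaces": [{
                    # Explicit references: required once the VPC is custom mode.
                    "network": f"projects/{project}/global/networks/{network}",
                    "subnetwork":
                        f"projects/{project}/regions/{region}/subnetworks/{subnet}",
                }],
            },
        }],
    }
```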
Your company has a single Virtual Private Cloud (VPC) network deployed in Google Cloud
with access from on-premises locations using Cloud Interconnect connections. Your
company must be able to send traffic to Cloud Storage only through the Interconnect links
while accessing other Google APIs and services over the public internet. What should you
do?
A.
Use the default public domains for all Google APIs and services.
B.
Use Private Service Connect to access Cloud Storage, and use the default public
domains for all other Google APIs and services.
C.
Use Private Google Access, with restricted.googleapis.com virtual IP addresses for
Cloud Storage and private.googleapis.com for all other Google APIs and services.
D.
Use Private Google Access, with private.googleapis.com virtual IP addresses for Cloud
Storage and restricted.googleapis.com virtual IP addresses for all other Google APIs and
services.
Use Private Service Connect to access Cloud Storage, and use the default public
domains for all other Google APIs and services.
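With Private Service Connect, you reserve an internal IP address in the VPC as an endpoint for the storage API and publish DNS so that storage.googleapis.com resolves to it from on-premises over the Interconnect, while every other Google API keeps its default public resolution. A toy split-horizon resolver illustrating the intended outcome (the endpoint IP and zone contents are placeholders):

```python
# Placeholder PSC endpoint: an internal address reserved in the VPC and
# reachable from on-premises over the Dedicated Interconnect.
PSC_STORAGE_ENDPOINT = "10.10.0.2"

# Illustrative private DNS override (Cloud DNS private zone or on-prem DNS).
PRIVATE_ZONE = {"storage.googleapis.com": PSC_STORAGE_ENDPOINT}

def resolve(hostname: str) -> str:
    """Address a client in the VPC or on-premises would use.

    Only Cloud Storage is overridden to the PSC endpoint; every other
    Google API falls through to its default public resolution,
    represented here by a sentinel string rather than a real lookup.
    """
    return PRIVATE_ZONE.get(hostname, "public-internet")

print(resolve("storage.googleapis.com"))  # -> 10.10.0.2
print(resolve("pubsub.googleapis.com"))   # -> public-internet
```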