

Google Professional-Cloud-Devops Exam

Page 12/21
Viewing Questions 111-120 out of 201 Questions

Question 111
You are configuring connectivity across Google Kubernetes Engine (GKE) clusters in different VPCs. You notice that the nodes in Cluster A are unable to access the nodes in Cluster B. You suspect that the workload access issue is due to the network configuration. You need to troubleshoot the issue but do not have execute access to workloads and nodes. You want to identify the layer at which the network connectivity is broken. What should you do?
A. Install a toolbox container on the node in Cluster A. Confirm that the routes to Cluster B are configured appropriately.
B. Use Network Connectivity Center to perform a Connectivity Test from Cluster A to Cluster B.
C. Use a debug container to run the traceroute command from Cluster A to Cluster B and from Cluster B to Cluster A. Identify the common failure point.
D. Enable VPC Flow Logs in both VPCs, and monitor packet drops.
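
If you follow option B, the Connectivity Test can also be created programmatically. The sketch below is a rough illustration that assumes the google-cloud-network-management Python client; the project IDs, VPC networks, and node IP addresses are hypothetical placeholders, and the test result reports where along the configured path traffic would be dropped.

    from google.cloud import network_management_v1

    client = network_management_v1.ReachabilityServiceClient()

    # Source and destination mirror a node in Cluster A and a node in Cluster B.
    test = network_management_v1.ConnectivityTest(
        source=network_management_v1.Endpoint(
            ip_address="10.0.1.10",  # hypothetical Cluster A node IP
            network="projects/proj-a/global/networks/vpc-a",
            project_id="proj-a",
        ),
        destination=network_management_v1.Endpoint(
            ip_address="10.8.1.10",  # hypothetical Cluster B node IP
            network="projects/proj-b/global/networks/vpc-b",
            project_id="proj-b",
        ),
        protocol="TCP",
    )

    operation = client.create_connectivity_test(
        request={
            "parent": "projects/proj-a/locations/global",
            "test_id": "cluster-a-to-cluster-b",
            "resource": test,
        }
    )
    result = operation.result()  # wait for the long-running operation
    print(result.reachability_details.result)  # e.g. REACHABLE / UNREACHABLE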

Question 112
You manage an application that runs in Google Kubernetes Engine (GKE) and uses the blue/green deployment methodology. Extracts of the Kubernetes manifests are shown below:
[Exhibit: Professional-Cloud-Devops_112Q.png - extracts of the Kubernetes manifests]
The Deployment app-green was updated to use the new version of the application. During post-deployment monitoring, you notice that the majority of user requests are failing. You did not observe this behavior in the testing environment. You need to mitigate the incident impact on users and enable the developers to troubleshoot the issue. What should you do?
A. Update the Deployment app-blue to use the new version of the application.
B. Update the Deployment app-green to use the previous version of the application.
C. Change the selector on the Service app-svc to app: my-app.
D. Change the selector on the Service app-svc to app: my-app, version: blue.
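
The Service selector is what routes traffic to either the blue or the green Pods, so the mitigation in option D is a single change to app-svc while the green Deployment stays running for troubleshooting. A minimal sketch with the Kubernetes Python client, assuming app-svc lives in the default namespace and uses the labels shown in the answer options:

    from kubernetes import client, config

    config.load_kube_config()  # or config.load_incluster_config() inside the cluster
    core_v1 = client.CoreV1Api()

    # Point the Service back at the blue (previous) Deployment's Pods.
    patch = {"spec": {"selector": {"app": "my-app", "version": "blue"}}}
    core_v1.patch_namespaced_service(name="app-svc", namespace="default", body=patch)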

Question 113
You are running a web application deployed to a Compute Engine managed instance group. Ops Agent is installed on all instances. You recently noticed suspicious activity from a specific IP address. You need to configure Cloud Monitoring to view the number of requests from that specific IP address with minimal operational overhead. What should you do?
A. Configure the Ops Agent with a logging receiver. Create a logs-based metric.
B. Create a script to scrape the web server log. Export the IP address request metrics to the Cloud Monitoring API.
C. Update the application to export the IP address request metrics to the Cloud Monitoring API.
D. Configure the Ops Agent with a metrics receiver.
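
For option A, the Ops Agent piece is a YAML logging receiver that ships the web server access logs into Cloud Logging; counting requests from one IP address is then a counter logs-based metric. A minimal sketch with the google-cloud-logging client, using a hypothetical project ID, log name, and field layout:

    from google.cloud import logging

    client = logging.Client(project="my-project")  # hypothetical project ID

    # Counter logs-based metric: increments once per matching log entry.
    metric = client.metric(
        "requests_from_suspicious_ip",
        filter_=(
            'logName="projects/my-project/logs/nginx_access" '  # hypothetical receiver name
            'AND httpRequest.remoteIp="203.0.113.45"'            # the suspicious IP
        ),
        description="Web requests received from the flagged IP address",
    )
    metric.create()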

Question 114
Your organization is using Helm to package containerized applications. Your applications reference both public and private charts. Your security team flagged that using a public Helm repository as a dependency is a risk. You want to manage all charts uniformly, with native access control and VPC Service Controls. What should you do?
A. Store public and private charts in OCI format by using Artifact Registry.
B. Store public and private charts by using GitHub Enterprise with Google Workspace as the identity provider.
C. Store public and private charts by using Git repository. Configure Cloud Build to synchronize contents of the repository into a Cloud Storage bucket. Connect Helm to the bucket by using https://[bucket].storage.googleapis.com/[helmchart] as the Helm repository.
D. Configure a Helm chart repository server to run in Google Kubernetes Engine (GKE) with Cloud Storage bucket as the storage backend.
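
Artifact Registry can store Helm charts as OCI artifacts (option A), which is what provides IAM-based access control and VPC Service Controls support for both the mirrored public charts and the private ones. The sketch below wraps the usual helm CLI flow in Python purely to keep the examples in one language; the region, project, and repository names are hypothetical:

    import subprocess

    REGISTRY = "oci://us-central1-docker.pkg.dev/my-project/my-helm-repo"  # hypothetical

    # Authenticate Helm against Artifact Registry with gcloud credentials.
    token = subprocess.run(
        ["gcloud", "auth", "print-access-token"],
        capture_output=True, text=True, check=True,
    ).stdout.strip()
    subprocess.run(
        ["helm", "registry", "login", "us-central1-docker.pkg.dev",
         "-u", "oauth2accesstoken", "--password-stdin"],
        input=token, text=True, check=True,
    )

    # Push a packaged chart, then pull it back or reference it as a dependency.
    subprocess.run(["helm", "push", "mychart-0.1.0.tgz", REGISTRY], check=True)
    subprocess.run(["helm", "pull", f"{REGISTRY}/mychart", "--version", "0.1.0"], check=True)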

Question 115
You use Terraform to manage an application deployed to a Google Cloud environment. The application runs on instances deployed by a managed instance group. The Terraform code is deployed by using a CI/CD pipeline. When you change the machine type on the instance template used by the managed instance group, the pipeline fails at the terraform apply stage with the following error message:
[Exhibit: Professional-Cloud-Devops_115Q.png - terraform apply error message]
You need to update the instance template and minimize disruption to the application and the number of pipeline runs.
What should you do?
A. Delete the managed instance group, and recreate it after updating the instance template.
B. Add a new instance template, update the managed instance group to use the new instance template, and delete the old instance template.
C. Remove the managed instance group from the Terraform state file, update the instance template, and reimport the managed instance group.
D. Set the create_before_destroy meta-argument to true in the lifecycle block on the instance template.


Question 116
Your company operates in a highly regulated domain that requires you to store all organization logs for seven years. You want to minimize logging infrastructure complexity by using managed services. You need to avoid any future loss of log capture or stored logs due to misconfiguration or human error. What should you do?
A. Use Cloud Logging to configure an aggregated sink at the organization level to export all logs into a BigQuery dataset.
B. Use Cloud Logging to configure an aggregated sink at the organization level to export all logs into Cloud Storage with a seven-year retention policy and Bucket Lock.
C. Use Cloud Logging to configure an export sink at each project level to export all logs into a BigQuery dataset.
D. Use Cloud Logging to configure an export sink at each project level to export all logs into Cloud Storage with a seven-year retention policy and Bucket Lock.
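
In option B, the aggregated sink is created once at the organization level (for example with gcloud logging sinks create --organization=ORG_ID --include-children), and the seven-year retention policy plus Bucket Lock on the destination bucket is what protects stored logs against misconfiguration or human error. A minimal sketch of locking the bucket with the google-cloud-storage client; the project and bucket names are hypothetical:

    from google.cloud import storage

    client = storage.Client(project="central-logging-project")  # hypothetical
    bucket = client.get_bucket("org-wide-log-archive")          # hypothetical sink destination

    # Enforce a seven-year retention period, then lock it. Locking is irreversible:
    # the policy can no longer be shortened or removed afterwards.
    bucket.retention_period = 7 * 365 * 24 * 60 * 60  # seven years, in seconds
    bucket.patch()
    bucket.lock_retention_policy()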

Question 117
You are building the CI/CD pipeline for an application deployed to Google Kubernetes Engine (GKE). The application is deployed by using a Kubernetes Deployment, Service, and Ingress. The application team asked you to deploy the application by using the blue/green deployment methodology. You need to implement the rollback actions. What should you do?
A. Run the kubectl rollout undo command.
B. Delete the new container image, and delete the running Pods.
C. Update the Kubernetes Service to point to the previous Kubernetes Deployment.
D. Scale the new Kubernetes Deployment to zero.

Question 118
You are building and running client applications in Cloud Run and Cloud Functions. Your client requires that all logs must be available for one year so that the client can import the logs into their logging service. You must minimize required code changes. What should you do?
A. Update all images in Cloud Run and all functions in Cloud Functions to send logs to both Cloud Logging and the client's logging service. Ensure that all the ports required to send logs are open in the VPC firewall.
B. Create a Pub/Sub topic, subscription, and logging sink. Configure the logging sink to send all logs into the topic. Give your client access to the topic to retrieve the logs.
C. Create a storage bucket and appropriate VPC firewall rules. Update all images in Cloud Run and all functions in Cloud Functions to send logs to a file within the storage bucket.
D. Create a logs bucket and logging sink. Set the retention on the logs bucket to 365 days. Configure the logging sink to send logs to the bucket. Give your client access to the bucket to retrieve the logs.
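
For option D, the 365-day retention is a property of the Cloud Logging bucket itself (for example gcloud logging buckets create client-logs --location=global --retention-days=365), and a sink routes the Cloud Run and Cloud Functions logs into it with no application changes. A minimal sketch of the sink with the google-cloud-logging client; the project and bucket names are hypothetical:

    from google.cloud import logging

    client = logging.Client(project="my-project")  # hypothetical project ID

    # Route every log entry in the project into the long-retention logs bucket.
    sink = client.sink(
        "client-export-sink",
        filter_="",  # an empty filter matches all log entries
        destination=(
            "logging.googleapis.com/projects/my-project/"
            "locations/global/buckets/client-logs"
        ),
    )
    sink.create()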

Question 119
You are building and running client applications in Cloud Run and Cloud Functions. Your client requires that all logs must be available for one year so that the client can import the logs into their logging service. You must minimize required code changes. What should you do?
A. Deploy Falco or Twistlock on GKE to monitor for vulnerabilities on your running Pods.
B. Configure Identity and Access Management (IAM) policies to create a least privilege model on your GKE clusters.
C. Use Binary Authorization to attest images during your CI/CD pipeline.
D. Enable Container Analysis in Artifact Registry, and check for common vulnerabilities and exposures (CVEs) in your container images.

Question 120
You have an application that runs in Google Kubernetes Engine (GKE). The application consists of several microservices that are deployed to GKE by using Deployments and Services. One of the microservices is experiencing an issue where a Pod returns 403 errors after the Pod has been running for more than five hours. Your development team is working on a solution, but the issue will not be resolved for a month. You need to ensure continued operations until the microservice is fixed. You want to follow Google-recommended practices and use the fewest number of steps. What should you do?
A. Create a cron job to terminate any Pods that have been running for more than five hours.
B. Add an HTTP liveness probe to the microservice's deployment.
C. Monitor the Pods, and terminate any Pods that have been running for more than five hours.
D. Configure an alert to notify you whenever a Pod returns 403 errors.
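
Option B works because the kubelet restarts a container whose liveness probe keeps failing, so Pods that drift into the broken state after about five hours are recycled automatically with a single manifest change. A minimal sketch that patches the existing Deployment with the Kubernetes Python client; the Deployment name, container name, health-check path, and port are hypothetical:

    from kubernetes import client, config

    config.load_kube_config()
    apps_v1 = client.AppsV1Api()

    # Strategic-merge patch: containers are merged by name, so only the probe is added.
    patch = {
        "spec": {"template": {"spec": {"containers": [{
            "name": "my-microservice",  # hypothetical container name
            "livenessProbe": {
                "httpGet": {"path": "/healthz", "port": 8080},  # hypothetical endpoint
                "periodSeconds": 30,
                "failureThreshold": 3,
            },
        }]}}}
    }
    apps_v1.patch_namespaced_deployment(
        name="my-microservice", namespace="default", body=patch
    )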


