Google Professional-Cloud-DevOps Exam

Viewing Questions 181-190 out of 201 Questions

Question 181
You manage a critical API running on Cloud Run that serves an average of 10,000 requests per minute. You need to define service level objectives (SLOs) for availability and latency to ensure that the API meets user expectations, which include 99.9% availability and a maximum latency of 200 milliseconds for 95% of requests. You also need to ensure these SLOs are actively monitored and measured. What should you do?
A. Configure Cloud Monitoring to send alerts when average API latency exceeds 150 ms or the error rate surpasses 0.1%.
B. Prioritize latency as the only SLO, targeting 100 ms for 99% of requests.
C. Set SLOs of 99% availability and 500 ms latency for 90% of requests. Use Cloud Monitoring to track SLOs and alert on violations.
D. Set SLOs for the API by using availability and latency service level indicators. Use Cloud Monitoring to track SLOs and alert on violations.
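Option D involves defining availability and latency service level indicators and tracking them in Cloud Monitoring. Below is a minimal sketch of creating the latency SLO (95% of requests under 200 ms) through the Cloud Monitoring service monitoring API, assuming the google-cloud-monitoring Python client and a Cloud Run service already registered as a monitored service; the project ID, service ID, and metric filter are placeholders, and the availability SLO would be created the same way with a good/total ratio SLI.

```python
from google.cloud import monitoring_v3

# Placeholders: replace with your project and the monitored service ID.
PROJECT_ID = "my-project"
SERVICE_ID = "my-cloud-run-api"

client = monitoring_v3.ServiceMonitoringServiceClient()

# Latency SLO: 95% of requests complete within 200 ms over a rolling 28 days.
latency_slo = monitoring_v3.ServiceLevelObjective(
    display_name="95% of requests under 200 ms",
    goal=0.95,
    rolling_period={"seconds": 28 * 24 * 60 * 60},
    service_level_indicator=monitoring_v3.ServiceLevelIndicator(
        request_based=monitoring_v3.RequestBasedSli(
            distribution_cut=monitoring_v3.DistributionCut(
                # Placeholder filter for the Cloud Run request latency distribution.
                distribution_filter=(
                    'metric.type="run.googleapis.com/request_latencies" '
                    'resource.type="cloud_run_revision"'
                ),
                range=monitoring_v3.Range(min=0, max=200),  # milliseconds
            )
        )
    ),
)

client.create_service_level_objective(
    parent=f"projects/{PROJECT_ID}/services/{SERVICE_ID}",
    service_level_objective=latency_slo,
)
```

Alerting on SLO violations (for example, burn-rate alerts) would then be configured as separate Cloud Monitoring alerting policies.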

Question 182
You are running a web application that connects to an AlloyDB cluster by using a private IP address in your default VPC. You need to run a database schema migration in your CI/CD pipeline by using Cloud Build before deploying a new version of your application. You want to follow Google-recommended security practices. What should you do?
A. Set up a Cloud Build private pool to access the database through a static external IP address. Configure the database to only allow connections from this IP address. Execute the schema migration script in the private pool.
B. Create a service account that has permission to access the database. Configure Cloud Build to use this service account and execute the schema migration script in a private pool.
C. Add the database username and password to Secret Manager. When running the schema migration script, retrieve the username and password from Secret Manager.
D. Add the database username and encrypted password to the application configuration file. Use these credentials in Cloud Build to execute the schema migration script.
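Options C and D differ in how the migration step obtains database credentials. As one illustration, here is a minimal sketch of retrieving a credential from Secret Manager inside a migration script, assuming the google-cloud-secret-manager Python client; the project and secret names are hypothetical.

```python
from google.cloud import secretmanager

# Hypothetical project and secret names for illustration.
PROJECT_ID = "my-project"
SECRET_ID = "db-password"

client = secretmanager.SecretManagerServiceClient()
name = f"projects/{PROJECT_ID}/secrets/{SECRET_ID}/versions/latest"

# Access the latest secret version; the payload holds the raw credential bytes.
response = client.access_secret_version(request={"name": name})
db_password = response.payload.data.decode("utf-8")
```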

Question 183
You use Artifact Registry to store container images built with Cloud Build. You need to ensure that all existing and new images are continuously scanned for vulnerabilities. You also want to track who pushed each image to the registry. What should you do?
A. Configure Artifact Registry to automatically scan new images and periodically re-scan all images. Use Cloud Audit Logs to track image uploads and identify the user who pushed each image.
B. Configure Artifact Registry to send vulnerability scan results to a Cloud Storage bucket. Use a separate script to parse results and notify a security team.
C. Configure Artifact Registry to automatically re-scan images daily. Enable Cloud Audit Logs to track these scans, and use Logs Explorer to identify vulnerabilities.
D. Configure Artifact Registry to automatically trigger vulnerability scans for new image tags, and view scan results. Use Cloud Audit Logs to track image tag creation events.
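Several options rely on Cloud Audit Logs to attribute image pushes to a user. Below is a minimal sketch of listing Artifact Registry audit log entries with the google-cloud-logging Python client, assuming the relevant audit logs are enabled; the project ID and filter are illustrative.

```python
from google.cloud import logging

client = logging.Client(project="my-project")  # hypothetical project ID

# Illustrative filter: audit log entries emitted by Artifact Registry.
log_filter = (
    'protoPayload.serviceName="artifactregistry.googleapis.com" '
    'AND protoPayload.@type="type.googleapis.com/google.cloud.audit.AuditLog"'
)

for entry in client.list_entries(filter_=log_filter, order_by=logging.DESCENDING):
    audit = entry.payload or {}  # AuditLog payload rendered as a dict
    principal = audit.get("authenticationInfo", {}).get("principalEmail")
    print(entry.timestamp, audit.get("methodName"), principal)
```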

Question 184
You manage a retail website for your company. The website consists of several microservices running in a GKE Standard node pool with node autoscaling enabled. Each microservice has resource limits and a Horizontal Pod Autoscaler configured. During a busy period, you receive alerts for one of the microservices. When you check the Pods, half of them have the status OOMKilled, and the number of Pods is at the minimum autoscaling limit. You need to resolve the issue. What should you do?
A. Update the node pool to use a machine type with more memory.
B. Increase the maximum number of nodes in the node pool.
C. Increase the maximum replica limit of the Horizontal Pod Autoscaler.
D. Increase the memory resource limit of the microservice.
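Option D changes the workload's own memory limit rather than the node size. Here is a minimal sketch of raising a Deployment's memory request and limit with the official Kubernetes Python client; the Deployment, container name, and sizes are hypothetical.

```python
from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config() inside the cluster
apps = client.AppsV1Api()

# Strategic-merge patch that raises the container's memory request and limit.
patch = {
    "spec": {
        "template": {
            "spec": {
                "containers": [
                    {
                        "name": "checkout",  # hypothetical container name
                        "resources": {
                            "requests": {"memory": "512Mi"},
                            "limits": {"memory": "1Gi"},
                        },
                    }
                ]
            }
        }
    }
}

apps.patch_namespaced_deployment(name="checkout", namespace="default", body=patch)
```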

Question 185
You are configuring a CI pipeline. The build step that runs your CI pipeline's integration tests requires access to APIs inside your private VPC network. Your security team requires that you do not expose API traffic publicly. You need to implement a solution that minimizes management overhead. What should you do?
A. Use Cloud Build private pools to connect to the private VPC.
B. Use Cloud Build to create a Compute Engine instance in the private VPC. Run the integration tests on the VM by using a startup script.
C. Use Cloud Build as a pipeline runner. Configure a cross-region internal Application Load Balancer for API access.
D. Use Cloud Build as a pipeline runner. Configure a global external Application Load Balancer with a Google Cloud Armor policy for API access.
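Option A uses Cloud Build private pools so build steps can reach the private VPC without exposing the APIs. Below is a minimal sketch of submitting a build that runs in a private worker pool with the Cloud Build Python client; the project, pool name, and build step are placeholders, and the options.pool field is assumed to mirror how private pools are referenced in cloudbuild.yaml.

```python
from google.cloud.devtools import cloudbuild_v1

# Hypothetical project, region, and worker pool names.
PROJECT_ID = "my-project"
POOL = f"projects/{PROJECT_ID}/locations/us-central1/workerPools/ci-private-pool"

client = cloudbuild_v1.CloudBuildClient()

build = cloudbuild_v1.Build(
    steps=[
        cloudbuild_v1.BuildStep(
            name="gcr.io/cloud-builders/curl",
            args=["http://10.0.0.5/internal-api/healthz"],  # placeholder private endpoint
        )
    ],
    # Run the build on the private pool so it has connectivity to the VPC.
    options=cloudbuild_v1.BuildOptions(
        pool=cloudbuild_v1.BuildOptions.PoolOption(name=POOL)
    ),
)

operation = client.create_build(project_id=PROJECT_ID, build=build)
operation.result()  # wait for the build to finish
```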


Question 186
You are deploying a new version of your application to a multi-zone Google Kubernetes Engine (GKE) cluster. The deployment is progressing smoothly, but you notice that some Pods in a specific zone are experiencing higher error rates. You need to selectively roll back the update for the Pods experiencing errors with minimal impact to users. What should you do?
A. Scale down the Pods in the affected zone. Redeploy the new version of the application.
B. Drain the affected nodes. Redeploy the new version of the application to the remaining nodes.
C. Modify the Deployment to use the Pod template from the previous version of your application. Perform a rolling update to replace the Pods in the affected zone.
D. Use the kubectl rollout undo command to roll back the entire deployment. Redeploy the new version of the application, excluding the affected zone.
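Option D mentions kubectl rollout undo; note that a Deployment rollout is not scoped to a zone, so an undo reverts every replica. A minimal sketch of invoking the rollback from a Python helper, with a hypothetical Deployment name:

```python
import subprocess

DEPLOYMENT = "deployment/storefront"  # hypothetical Deployment name

# Inspect past revisions, then roll the Deployment back to the previous one.
subprocess.run(["kubectl", "rollout", "history", DEPLOYMENT], check=True)
subprocess.run(["kubectl", "rollout", "undo", DEPLOYMENT], check=True)

# Or roll back to a specific revision number.
subprocess.run(["kubectl", "rollout", "undo", DEPLOYMENT, "--to-revision=2"], check=True)
```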

Question 187
You work for a healthcare company and regulations require you to create all resources in a United States-based region. You attempted to create a secret in Secret Manager but received the following error message:
Constraint constraints/gcp.resourceLocations violated for [orgpolicy:projects/000000] attempting to create a secret in [global]
You need to resolve the error while remaining compliant with regulations. What should you do?
A. Remove the organization policy referenced in the error message.
B. Create the secret with an automatic replication policy.
C. Create the secret with a user-managed replication policy.
D. Add the global region to the organization policy referenced in the error message.
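Option C pins the secret's storage to regions you choose with a user-managed replication policy. A minimal sketch with the google-cloud-secret-manager Python client, using hypothetical project, secret, and region values:

```python
from google.cloud import secretmanager

client = secretmanager.SecretManagerServiceClient()
PROJECT_ID = "my-project"  # hypothetical

# Create the secret with replicas pinned to US regions only.
secret = client.create_secret(
    request={
        "parent": f"projects/{PROJECT_ID}",
        "secret_id": "patient-api-key",  # hypothetical secret name
        "secret": {
            "replication": {
                "user_managed": {
                    "replicas": [
                        {"location": "us-central1"},
                        {"location": "us-east1"},
                    ]
                }
            }
        },
    }
)
print(f"Created secret: {secret.name}")
```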

Question 188
You are responsible for creating development environments for your company's development team. You want to create environments with identical IDEs for all developers while ensuring that these environments are not exposed to public networks. You need to choose the most cost-effective solution without impacting developer productivity. What should you do?
A. Create multiple Compute Engine VM instances with a public IP address and use a Public NAT gateway. Configure an instance schedule to shut down the VMs.
B. Create multiple Compute Engine VM instances without a public IP address. Configure an instance schedule to shut down the VMs.
C. Create a Cloud Workstations private cluster. Create a workstation configuration with an idleTimeout parameter.
D. Create a Cloud Workstations private cluster. Create a workstation configuration with a runningTimeout parameter.
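Options A and B rely on instance schedules to shut VMs down outside working hours. Here is a minimal sketch of creating and attaching such a schedule by shelling out to gcloud from Python; the schedule, VM, region, and cron expressions are hypothetical, and the exact flags are assumed from the Compute Engine instance-schedule feature.

```python
import subprocess

# Hypothetical names and schedule values.
SCHEDULE = "dev-workday-schedule"
REGION = "us-central1"
ZONE = "us-central1-a"
VM = "dev-workstation-1"

# Create a resource policy that starts VMs at 08:00 and stops them at 18:00 on weekdays.
subprocess.run(
    [
        "gcloud", "compute", "resource-policies", "create", "instance-schedule", SCHEDULE,
        f"--region={REGION}",
        "--vm-start-schedule=0 8 * * MON-FRI",
        "--vm-stop-schedule=0 18 * * MON-FRI",
        "--timezone=America/New_York",
    ],
    check=True,
)

# Attach the schedule to a development VM.
subprocess.run(
    [
        "gcloud", "compute", "instances", "add-resource-policies", VM,
        f"--zone={ZONE}", f"--resource-policies={SCHEDULE}",
    ],
    check=True,
)
```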

Question 189
Your company uses Cloud Deploy with multiple delivery pipelines for deploying applications to different environments. Your development team currently lacks access to any of these pipelines. You need to grant the team access to only the development delivery pipeline, while following Google-recommended practices. What should you do?
A. In the Google Cloud console, grant the development team the roles/clouddeploy.operator role. Add deny conditions to all pipelines other than the development delivery pipeline.
B. In the Google Cloud console, create a custom IAM role with all clouddeploy.automations.* permissions and an allow policy for only the development delivery pipeline. Grant this IAM role to the development team.
C. Grant the development team the roles/clouddeploy.operator role in a policy file. Apply the policy file to the development target.
D. Grant the development team the roles/clouddeploy.developer role in a policy file. Apply this policy file to the development delivery pipeline.
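Options C and D both describe applying an IAM policy file to a specific Cloud Deploy resource. A minimal sketch of writing such a policy file and applying it from Python; the pipeline, region, and group are hypothetical, and the set-iam-policy invocation is assumed to be available for delivery pipelines.

```python
import subprocess
from pathlib import Path

# Hypothetical values for illustration.
PIPELINE = "dev-delivery-pipeline"
REGION = "us-central1"
POLICY_FILE = Path("policy.yaml")

# Policy file granting the development team the Cloud Deploy developer role.
POLICY_FILE.write_text(
    "bindings:\n"
    "- members:\n"
    "  - group:dev-team@example.com\n"
    "  role: roles/clouddeploy.developer\n"
)

# Apply the policy to the development delivery pipeline only.
subprocess.run(
    [
        "gcloud", "deploy", "delivery-pipelines", "set-iam-policy",
        PIPELINE, str(POLICY_FILE), f"--region={REGION}",
    ],
    check=True,
)
```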

Question 190
Your company has recently experienced several production service issues. You need to create a Cloud Monitoring dashboard to troubleshoot the issues, and you want to use the dashboard to distinguish between failures in your own service and those caused by a Google Cloud service that you use. What should you do?
A. Create a log-based metric to track cloud service errors, and display the metric on the dashboard.
B. Create a logs widget to display system errors from Cloud Logging on the dashboard.
C. Create an alerting policy for the system error metrics.
D. Enable Personalized Service Health annotations on the dashboard.
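Option A uses a log-based metric to separate your own service's errors from platform issues. A minimal sketch of creating one with the google-cloud-logging Python client so it can then be charted on a dashboard; the project, metric name, and filter are placeholders.

```python
from google.cloud import logging

client = logging.Client(project="my-project")  # hypothetical project ID

# Log-based metric counting ERROR-level entries from the application's own logs.
metric = client.metric(
    "app-error-count",
    filter_='severity>=ERROR AND resource.type="k8s_container"',  # placeholder filter
    description="Count of application error log entries",
)

if not metric.exists():
    metric.create()
```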


