Google Professional-Cloud-Devops Exam

Page 16/21
Viewing Questions 151-160 out of 201 Questions

Question 151
Your company runs applications in Google Kubernetes Engine (GKE) that are deployed following a GitOps methodology. Application developers frequently create cloud resources to support their applications. You want to give developers the ability to manage infrastructure as code, while ensuring that you follow Google-recommended practices. You need to ensure that infrastructure as code reconciles periodically to avoid configuration drift. What should you do?
A. Install and configure Config Connector in Google Kubernetes Engine (GKE).
B. Configure Cloud Build with a Terraform builder to execute terraform plan and terraform apply commands.
C. Create a Pod resource with a Terraform Docker image to execute terraform plan and terraform apply commands.
D. Create a Job resource with a Terraform Docker image to execute terraform plan and terraform apply commands.

Question 152
You are designing a system with three different environments: development, quality assurance (QA), and production. Each environment will be deployed with Terraform and has a Google Kubernetes Engine (GKE) cluster created so that application teams can deploy their applications. Anthos Config Management will be used and templated to deploy infrastructure level resources in each GKE cluster. All users (for example, infrastructure operators and application owners) will use GitOps. How should you structure your source control repositories for both Infrastructure as Code (IaC) and application code?
A. • Cloud Infrastructure (Terraform) repository is shared: different directories are different environments
• GKE Infrastructure (Anthos Config Management Kustomize manifests) repository is shared: different overlay directories are different environments
• Application (app source code) repositories are separated: different branches are different features
B. • Cloud Infrastructure (Terraform) repository is shared: different directories are different environments
• GKE Infrastructure (Anthos Config Management Kustomize manifests) repositories are separated: different branches are different environments
• Application (app source code) repositories are separated: different branches are different features
C. • Cloud Infrastructure (Terraform) repository is shared: different branches are different environments
• GKE Infrastructure (Anthos Config Management Kustomize manifests) repository is shared: different overlay directories are different environments
• Application (app source code) repository is shared: different directories are different features
D. • Cloud Infrastructure (Terraform) repositories are separated: different branches are different environments
• GKE Infrastructure (Anthos Config Management Kustomize manifests) repositories are separated: different overlay directories are different environments
• Application (app source code) repositories are separated: different branches are different features

Question 153
You are configuring Cloud Logging for a new application that runs on a Compute Engine instance with a public IP address. A user-managed service account is attached to the instance. You confirmed that the necessary agents are running on the instance but you cannot see any log entries from the instance in Cloud Logging. You want to resolve the issue by following Google-recommended practices. What should you do?
A. Export the service account key and configure the agents to use the key.
B. Update the instance to use the default Compute Engine service account.
C. Add the Logs Writer role to the service account.
D. Enable Private Google Access on the subnet that the instance is in.

Question 154
As a Site Reliability Engineer, you support an application written in Go that runs on Google Kubernetes Engine (GKE) in production. After releasing a new version of the application, you notice the application runs for about 15 minutes and then restarts. You decide to add Cloud Profiler to your application and now notice that the heap usage grows constantly until the application restarts. What should you do?
A. Increase the CPU limit in the application deployment.
B. Add high memory compute nodes to the cluster.
C. Increase the memory limit in the application deployment.
D. Add Cloud Trace to the application, and redeploy.
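
For reference, adding Cloud Profiler to a Go service, as this question describes, usually amounts to starting the profiler agent before the application begins serving traffic. The sketch below is a minimal illustration using the cloud.google.com/go/profiler package; the service name and version strings are placeholders, not values taken from the question.

package main

import (
	"log"
	"net/http"

	"cloud.google.com/go/profiler"
)

func main() {
	// Start the Cloud Profiler agent before serving traffic so heap and
	// CPU profiles are collected continuously from the running Pods.
	if err := profiler.Start(profiler.Config{
		Service:        "example-go-service", // placeholder service name
		ServiceVersion: "1.2.0",              // placeholder version label
	}); err != nil {
		log.Fatalf("failed to start Cloud Profiler: %v", err)
	}

	http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		w.Write([]byte("ok"))
	})
	log.Fatal(http.ListenAndServe(":8080", nil))
}

On GKE, the agent normally picks up the project ID from the metadata server, so an explicit ProjectID is usually not needed.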

Question 155
You are deploying a Cloud Build job that deploys Terraform code when a Git branch is updated. While testing, you noticed that the job fails. You see the following error in the build logs:
Initializing the backend...
Error: Failed to get existing workspaces: querying Cloud Storage failed: googleapi: Error 403
You need to resolve the issue by following Google-recommended practices. What should you do?
A. Change the Terraform code to use local state.
B. Create a storage bucket with the name specified in the Terraform configuration.
C. Grant the roles/owner Identity and Access Management (IAM) role to the Cloud Build service account on the project.
D. Grant the roles/storage.objectAdmin Identity and Access Management (IAM) role to the Cloud Build service account on the state file bucket.
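
Option D above refers to granting roles/storage.objectAdmin on the Terraform state bucket to the Cloud Build service account. In practice this grant is typically made with gcloud or Terraform itself; the Go sketch below only illustrates what the same grant looks like through the Cloud Storage client library, with the bucket name and service account address as placeholders.

package main

import (
	"context"
	"log"

	"cloud.google.com/go/storage"
)

func main() {
	ctx := context.Background()

	client, err := storage.NewClient(ctx)
	if err != nil {
		log.Fatalf("storage.NewClient: %v", err)
	}
	defer client.Close()

	// Placeholder state bucket and Cloud Build service account.
	bucket := client.Bucket("example-terraform-state-bucket")
	member := "serviceAccount:123456789@cloudbuild.gserviceaccount.com"

	// Read-modify-write the bucket IAM policy to add the objectAdmin role.
	policy, err := bucket.IAM().Policy(ctx)
	if err != nil {
		log.Fatalf("get IAM policy: %v", err)
	}
	policy.Add(member, "roles/storage.objectAdmin")
	if err := bucket.IAM().SetPolicy(ctx, policy); err != nil {
		log.Fatalf("set IAM policy: %v", err)
	}
}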


Question 156
Your company runs applications in Google Kubernetes Engine (GKE). Several applications rely on ephemeral volumes. You noticed some applications were unstable due to the DiskPressure node condition on the worker nodes. You need to identify which Pods are causing the issue, but you do not have execute access to workloads and nodes. What should you do?
A. Check the node/ephemeral_storage/used_bytes metric by using Metrics Explorer.
B. Check the container/ephemeral_storage/used_bytes metric by using Metrics Explorer.
C. Locate all the Pods with emptyDir volumes. Use the df -h command to measure volume disk usage.
D. Locate all the Pods with emptyDir volumes. Use the du -sh * command to measure volume disk usage.
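
Options A and B above refer to the ephemeral storage metrics that GKE exports to Cloud Monitoring. Besides viewing them in Metrics Explorer, the container-level metric can also be read programmatically; the sketch below is a minimal example using the Cloud Monitoring Go client, with the project ID and time window as placeholder values.

package main

import (
	"context"
	"fmt"
	"log"
	"time"

	monitoring "cloud.google.com/go/monitoring/apiv3/v2"
	"cloud.google.com/go/monitoring/apiv3/v2/monitoringpb"
	"google.golang.org/api/iterator"
	"google.golang.org/protobuf/types/known/timestamppb"
)

func main() {
	ctx := context.Background()

	client, err := monitoring.NewMetricClient(ctx)
	if err != nil {
		log.Fatalf("NewMetricClient: %v", err)
	}
	defer client.Close()

	now := time.Now()
	req := &monitoringpb.ListTimeSeriesRequest{
		Name:   "projects/my-project", // placeholder project ID
		Filter: `metric.type = "kubernetes.io/container/ephemeral_storage/used_bytes"`,
		Interval: &monitoringpb.TimeInterval{
			StartTime: timestamppb.New(now.Add(-10 * time.Minute)),
			EndTime:   timestamppb.New(now),
		},
		View: monitoringpb.ListTimeSeriesRequest_FULL,
	}

	// Print the most recent ephemeral storage usage per container.
	it := client.ListTimeSeries(ctx, req)
	for {
		ts, err := it.Next()
		if err == iterator.Done {
			break
		}
		if err != nil {
			log.Fatalf("ListTimeSeries: %v", err)
		}
		labels := ts.GetResource().GetLabels()
		points := ts.GetPoints()
		if len(points) == 0 {
			continue
		}
		fmt.Printf("pod=%s container=%s used_bytes=%d\n",
			labels["pod_name"], labels["container_name"],
			points[0].GetValue().GetInt64Value())
	}
}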

Question 157
You are designing a new Google Cloud organization for a client. Your client is concerned with the risks associated with long-lived credentials created in Google Cloud. You need to design a solution to completely eliminate the risks associated with the use of JSON service account keys while minimizing operational overhead. What should you do?
A. Apply the constraints/iam.disableServiceAccountKeyCreation constraint to the organization.
B. Use custom versions of predefined roles to exclude all iam.serviceAccountKeys.* service account role permissions.
C. Apply the constraints/iam.disableServiceAccountKeyUpload constraint to the organization.
D. Grant the roles/iam.serviceAccountKeyAdmin IAM role to organization administrators only.

Question 158
You are designing a deployment technique for your applications on Google Cloud. As part of your deployment planning, you want to use live traffic to gather performance metrics for new versions of your applications. You need to test against the full production load before your applications are launched. What should you do?
A. Use A/B testing with blue/green deployment.
B. Use canary testing with continuous deployment.
C. Use canary testing with rolling updates deployment.
D. Use shadow testing with continuous deployment.

Question 159
Your Cloud Run application writes unstructured logs as text strings to Cloud Logging. You want to convert the unstructured logs to JSON-based structured logs. What should you do?
A. Modify the application to use Cloud Logging software development kit (SDK), and send log entries with a jsonPayload field.
B. Install a Fluent Bit sidecar container, and use a JSON parser.
C. Install the log agent in the Cloud Run container image, and use the log agent to forward logs to Cloud Logging.
D. Configure the log agent to convert log text payload to JSON payload.
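
Independently of which option is correct, one common way a Go service on Cloud Run produces JSON-structured logs is to write one JSON object per line to stdout; recognized fields such as severity and message are mapped into the log entry and the remaining fields land in jsonPayload. The sketch below is a minimal illustration; field names other than severity and message are placeholders.

package main

import (
	"encoding/json"
	"log"
	"net/http"
	"os"
)

// logEntry mirrors the JSON fields that Cloud Logging recognizes when a
// JSON line is written to stdout from a Cloud Run service.
type logEntry struct {
	Severity string `json:"severity"`
	Message  string `json:"message"`
	// Any extra fields end up inside jsonPayload.
	OrderID string `json:"orderId,omitempty"` // placeholder field
}

func logJSON(e logEntry) {
	b, err := json.Marshal(e)
	if err != nil {
		log.Printf("failed to marshal log entry: %v", err)
		return
	}
	os.Stdout.Write(append(b, '\n'))
}

func main() {
	http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		logJSON(logEntry{
			Severity: "INFO",
			Message:  "order received",
			OrderID:  "12345", // placeholder value for illustration
		})
		w.Write([]byte("ok"))
	})

	port := os.Getenv("PORT")
	if port == "" {
		port = "8080"
	}
	log.Fatal(http.ListenAndServe(":"+port, nil))
}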

Question 160
Your company is planning a large marketing event for an online retailer during the holiday shopping season. You are expecting your web application to receive a large volume of traffic in a short period. You need to prepare your application for potential failures during the event. What should you do? (Choose two.)
A. Configure Anthos Service Mesh on the application to identify issues on the topology map.
B. Ensure that relevant system metrics are being captured with Cloud Monitoring, and create alerts at levels of interest.
C. Review your increased capacity requirements and plan for the required quota management.
D. Monitor latency of your services for average percentile latency.
E. Create alerts in Cloud Monitoring for all common failures that your application experiences.


