Google Professional-Cloud-Developer Exam

Viewing Questions 221 - 230 out of 375 Questions

Question 221
You are monitoring a web application that is written in Go and deployed in Google Kubernetes Engine. You notice an increase in CPU and memory utilization. You need to determine which source code is consuming the most CPU and memory resources. What should you do?
A. Download, install, and start the Snapshot Debugger agent in your VM. Take debug snapshots of the functions that take the longest time. Review the call stack frame, and identify the local variables at that level in the stack.
B. Import the Cloud Profiler package into your application, and initialize the Profiler agent. Review the generated flame graph in the Google Cloud console to identify time-intensive functions.
C. Import OpenTelemetry and Trace export packages into your application, and create the trace provider.
Review the latency data for your application on the Trace overview page, and identify where bottlenecks are occurring.
D. Create a Cloud Logging query that gathers the web application's logs. Write a Python script that calculates the difference between the timestamps from the beginning and the end of the application's longest functions to identify time-intensive functions.
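
For reference, option B's approach in a Go service is a one-time agent initialization. A minimal sketch is shown below; the service name, version, port, and handler are placeholders, not part of the question:

    package main

    import (
        "log"
        "net/http"

        "cloud.google.com/go/profiler"
    )

    func main() {
        // Start the Cloud Profiler agent before serving traffic. CPU and heap
        // profiles then appear as flame graphs in the Google Cloud console.
        if err := profiler.Start(profiler.Config{
            Service:        "recommendation-web", // placeholder service name
            ServiceVersion: "1.0.0",              // placeholder version
        }); err != nil {
            log.Fatalf("failed to start profiler: %v", err)
        }

        http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
            w.Write([]byte("ok"))
        })
        log.Fatal(http.ListenAndServe(":8080", nil))
    }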

Question 222
You have a container deployed on Google Kubernetes Engine. The container can sometimes be slow to launch, so you have implemented a liveness probe. You notice that the liveness probe occasionally fails on launch. What should you do?
A. Add a startup probe.
B. Increase the initial delay for the liveness probe.
C. Increase the CPU limit for the container.
D. Add a readiness probe.
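
As context for options A and B, Kubernetes lets a startup probe hold off the liveness probe until the container has finished starting. A minimal sketch of what option A involves; the pod name, image, health path, and timings are placeholders:

    apiVersion: v1
    kind: Pod
    metadata:
      name: slow-start-app                    # placeholder name
    spec:
      containers:
      - name: app
        image: gcr.io/my-project/app:latest   # placeholder image
        startupProbe:
          httpGet:
            path: /healthz
            port: 8080
          failureThreshold: 30                # allows up to 30 x 10s = 300s to start
          periodSeconds: 10
        livenessProbe:
          httpGet:
            path: /healthz
            port: 8080
          periodSeconds: 10                   # runs only after the startup probe succeeds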

Question 223
You work for an organization that manages an ecommerce site. Your application is deployed behind a global HTTP(S) load balancer. You need to test a new product recommendation algorithm. You plan to use A/B testing to determine the new algorithm’s effect on sales in a randomized way. How should you test this feature?
A. Split traffic between versions using weights.
B. Enable the new recommendation feature flag on a single instance.
C. Mirror traffic to the new version of your application.
D. Use HTTP header-based routing.
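
On load balancers that support advanced traffic management, the weight-based split described in option A is expressed in the URL map. A rough sketch that could be applied with gcloud compute url-maps import; the project, URL map, and backend service names are placeholders:

    name: recs-url-map
    defaultService: projects/my-project/global/backendServices/recs-v1
    hostRules:
    - hosts: ['*']
      pathMatcher: split
    pathMatchers:
    - name: split
      defaultRouteAction:
        weightedBackendServices:
        - backendService: projects/my-project/global/backendServices/recs-v1
          weight: 90                          # current algorithm
        - backendService: projects/my-project/global/backendServices/recs-v2
          weight: 10                          # new recommendation algorithm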

Question 224
You plan to deploy a new application revision with a Deployment resource to Google Kubernetes Engine (GKE) in production. The container might not work correctly. You want to minimize risk in case there are issues after deploying the revision. You want to follow Google-recommended best practices. What should you do?
A. Perform a rolling update with a PodDisruptionBudget of 80%.
B. Perform a rolling update with a HorizontalPodAutoscaler scale-down policy value of 0.
C. Convert the Deployment to a StatefulSet, and perform a rolling update with a PodDisruptionBudget of 80%.
D. Convert the Deployment to a StatefulSet, and perform a rolling update with a HorizontalPodAutoscaler scale-down policy value of 0.
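
For reference, the PodDisruptionBudget mentioned in options A and C is a separate resource that limits voluntary disruptions. A minimal sketch; the name and label are placeholders:

    apiVersion: policy/v1
    kind: PodDisruptionBudget
    metadata:
      name: web-pdb                           # placeholder name
    spec:
      minAvailable: "80%"                     # keep at least 80% of matching pods running
      selector:
        matchLabels:
          app: web                            # placeholder label on the Deployment's pods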

Question 225
Before promoting your new application code to production, you want to conduct testing across a variety of different users. Although this plan is risky, you want to test the new version of the application with production users, and you want to control which users are forwarded to the new version based on their operating system. If bugs are discovered in the new version, you want to roll back the newly deployed version of the application as quickly as possible.
What should you do?
A. Deploy your application on Cloud Run. Use traffic splitting to direct a subset of user traffic to the new version based on the revision tag.
B. Deploy your application on Google Kubernetes Engine with Anthos Service Mesh. Use traffic splitting to direct a subset of user traffic to the new version based on the user-agent header.
C. Deploy your application on App Engine. Use traffic splitting to direct a subset of user traffic to the new version based on the IP address.
D. Deploy your application on Compute Engine. Use Traffic Director to direct a subset of user traffic to the new version based on predefined weights.
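
As an illustration of the header-based splitting described in option B, an Istio/Anthos Service Mesh VirtualService can match the User-Agent header and route matching users to the new version. The host, subsets, and regex below are placeholders and assume a corresponding DestinationRule defines the v1 and v2 subsets:

    apiVersion: networking.istio.io/v1beta1
    kind: VirtualService
    metadata:
      name: frontend
    spec:
      hosts:
      - frontend.prod.svc.cluster.local
      http:
      - match:
        - headers:
            user-agent:
              regex: ".*Android.*"            # placeholder operating-system match
        route:
        - destination:
            host: frontend.prod.svc.cluster.local
            subset: v2                        # new version
      - route:
        - destination:
            host: frontend.prod.svc.cluster.local
            subset: v1                        # current version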


Question 226
Your team is writing a backend application to implement the business logic for an interactive voice response (IVR) system that will support a payroll application. The IVR system has the following technical characteristics:
• Each customer phone call is associated with a unique IVR session.
• The IVR system creates a separate persistent gRPC connection to the backend for each session.
• If the connection is interrupted, the IVR system establishes a new connection, causing a slight latency for that call.
You need to determine which compute environment should be used to deploy the backend application. Using current call data, you determine that:
• Call duration ranges from 1 to 30 minutes.
• Calls are typically made during business hours.
• There are significant spikes of calls around certain known dates (e.g., pay days), or when large payroll changes occur.
You want to minimize cost, effort, and operational overhead. Where should you deploy the backend application?
A. Compute Engine
B. Google Kubernetes Engine cluster in Standard mode
C. Cloud Functions
D. Cloud Run
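
If a serverless runtime such as Cloud Run (option D) were chosen, the long-lived gRPC connections would need end-to-end HTTP/2 and a request timeout that covers the longest calls. A rough sketch; the service name, image, region, and scaling limits are placeholders:

    gcloud run deploy ivr-backend \
        --image=us-docker.pkg.dev/my-project/ivr/backend:latest \
        --region=us-central1 \
        --use-http2 \
        --timeout=3600 \
        --min-instances=0 \
        --max-instances=100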

Question 227
You are developing an application hosted on Google Cloud that uses a MySQL relational database schema. The application will have a large volume of reads and writes to the database and will require backups and ongoing capacity planning. Your team does not have time to fully manage the database but can take on small administrative tasks. How should you host the database?
A. Configure Cloud SQL to host the database, and import the schema into Cloud SQL.
B. Deploy MySQL from the Google Cloud Marketplace, connect to the database using a client, and import the schema.
C. Configure Bigtable to host the database, and import the data into Bigtable.
D. Configure Cloud Spanner to host the database, and import the schema into Cloud Spanner.
E. Configure Firestore to host the database, and import the data into Firestore.
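
For reference, option A amounts to creating a managed Cloud SQL for MySQL instance with automated backups and importing the schema dump. A rough sketch; the instance name, tier, region, bucket, and database are placeholders:

    gcloud sql instances create ecommerce-db \
        --database-version=MYSQL_8_0 \
        --tier=db-n1-standard-4 \
        --region=us-central1 \
        --backup-start-time=02:00

    gcloud sql import sql ecommerce-db gs://my-bucket/schema.sql \
        --database=ecommerce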

Question 228
You are developing a new web application using Cloud Run and committing code to Cloud Source Repositories. You want to deploy new code in the most efficient way possible. You have already created a Cloud Build YAML file that builds a container and runs the following command: gcloud run deploy. What should you do next?
A. Create a Pub/Sub topic to be notified when code is pushed to the repository. Create a Pub/Sub trigger that runs the build file when an event is published to the topic.
B. Create a build trigger that runs the build file in response to code being pushed to the development branch of the repository.
C. Create a webhook build trigger that runs the build file in response to HTTP POST calls to the webhook URL.
D. Create a Cron job that runs the following command every 24 hours: gcloud builds submit.
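
As context for option B, a push-based build trigger ties the existing build file to a branch. A rough sketch; the trigger, repository, and branch names are placeholders:

    gcloud builds triggers create cloud-source-repositories \
        --name=deploy-on-dev-push \
        --repo=my-web-app \
        --branch-pattern="^development$" \
        --build-config=cloudbuild.yaml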

Question 229
You are a developer at a large organization. You are deploying a web application to Google Kubernetes Engine (GKE). The DevOps team has built a CI/CD pipeline that uses Cloud Deploy to deploy the application to Dev, Test, and Prod clusters in GKE. After Cloud Deploy successfully deploys the application to the Dev cluster, you want to automatically promote it to the Test cluster. How should you configure this process following Google-recommended best practices?
A. 1. Create a Cloud Build trigger that listens for SUCCEEDED Pub/Sub messages from the clouddeploy-operations topic.
2. Configure Cloud Build to include a step that promotes the application to the Test cluster.
B. 1. Create a Cloud Function that calls the Google Cloud Deploy API to promote the application to the Test cluster.
2. Configure this function to be triggered by SUCCEEDED Pub/Sub messages from the cloud-builds topic.
C. 1. Create a Cloud Function that calls the Google Cloud Deploy API to promote the application to the Test cluster.
2. Configure this function to be triggered by SUCCEEDED Pub/Sub messages from the clouddeploy-operations topic.
D. 1. Create a Cloud Build pipeline that uses the gke-deploy builder.
2. Create a Cloud Build trigger that listens for SUCCEEDED Pub/Sub messages from the cloud-builds topic.
3. Configure this pipeline to run a deployment step to the Test cluster.
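
To illustrate the promotion step that options B and C describe, a Pub/Sub-triggered Cloud Function could call the Cloud Deploy API and create a rollout of the release in the Test target. The sketch below uses placeholder project, region, pipeline, and target names and omits parsing of the Pub/Sub message:

    package promote

    import (
        "context"
        "fmt"

        deploy "cloud.google.com/go/deploy/apiv1"
        "cloud.google.com/go/deploy/apiv1/deploypb"
    )

    // PromoteToTest creates a rollout of the given release in the "test" target.
    func PromoteToTest(ctx context.Context, release string) error {
        client, err := deploy.NewCloudDeployClient(ctx)
        if err != nil {
            return fmt.Errorf("NewCloudDeployClient: %w", err)
        }
        defer client.Close()

        // Placeholder resource path for the release to promote.
        parent := "projects/my-project/locations/us-central1/" +
            "deliveryPipelines/web-app/releases/" + release
        op, err := client.CreateRollout(ctx, &deploypb.CreateRolloutRequest{
            Parent:    parent,
            RolloutId: release + "-to-test",
            Rollout:   &deploypb.Rollout{TargetId: "test"},
        })
        if err != nil {
            return fmt.Errorf("CreateRollout: %w", err)
        }
        _, err = op.Wait(ctx)
        return err
    }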

Question 230
Your application is running as a container in a Google Kubernetes Engine cluster. You need to add a secret to your application using a secure approach. What should you do?
A. Create a Kubernetes Secret, and pass the Secret as an environment variable to the container.
B. Enable Application-layer Secret Encryption on the cluster using a Cloud Key Management Service (KMS) key.
C. Store the credential in Cloud KMS. Create a Google service account (GSA) to read the credential from Cloud KMS. Export the GSA key as a .json file, and pass the .json file to the container as a volume so that it can read the credential from Cloud KMS.
D. Store the credential in Secret Manager. Create a Google service account (GSA) to read the credential from Secret Manager. Create a Kubernetes service account (KSA) to run the container. Use Workload Identity to configure your KSA to act as a GSA.
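
For reference, the Workload Identity wiring described in option D is set up once per workload; the application then reads the secret at runtime with the Secret Manager client library. A rough sketch; the project, secret, namespace, and account names are placeholders:

    # Store the credential and allow the GSA to read it.
    gcloud secrets create app-credential --replication-policy=automatic
    gcloud iam service-accounts create app-gsa
    gcloud secrets add-iam-policy-binding app-credential \
        --member="serviceAccount:app-gsa@my-project.iam.gserviceaccount.com" \
        --role="roles/secretmanager.secretAccessor"

    # Bind the KSA that runs the container to the GSA via Workload Identity.
    kubectl create serviceaccount app-ksa --namespace default
    gcloud iam service-accounts add-iam-policy-binding \
        app-gsa@my-project.iam.gserviceaccount.com \
        --member="serviceAccount:my-project.svc.id.goog[default/app-ksa]" \
        --role="roles/iam.workloadIdentityUser"
    kubectl annotate serviceaccount app-ksa --namespace default \
        iam.gke.io/gcp-service-account=app-gsa@my-project.iam.gserviceaccount.com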


