Question 191
You have an application deployed in Google Kubernetes Engine (GKE). You need to update the application to make authorized requests to Google Cloud managed services. You want this to be a one-time setup, and you need to follow security best practices by auto-rotating your security keys and storing them in an encrypted store. You already created a service account with appropriate access to the Google Cloud service. What should you do next?
A. Assign the Google Cloud service account to your GKE Pod using Workload Identity.
B. Export the Google Cloud service account key, and share it with the Pod as a Kubernetes Secret.
C. Export the Google Cloud service account key, and embed it in the source code of the application.
D. Export the Google Cloud service account key, and upload it to HashiCorp Vault to generate a dynamic service account for your application.
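For context on the Workload Identity approach in option A: once the Pod's Kubernetes service account is bound to the Google Cloud service account, application code relies on Application Default Credentials, so no key file is mounted, stored, or rotated by you. A minimal Python sketch of what the application code looks like under that setup; the bucket name is a hypothetical placeholder:

    # Runs inside a GKE Pod whose Kubernetes service account is bound to a
    # Google Cloud service account via Workload Identity. No key file is needed:
    # Application Default Credentials resolve to the bound service account.
    import google.auth
    from google.cloud import storage

    credentials, project_id = google.auth.default()

    # Any Google Cloud client library picks up the same credentials automatically.
    client = storage.Client(credentials=credentials, project=project_id)
    for blob in client.list_blobs("example-bucket"):  # hypothetical bucket name
        print(blob.name)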
Question 192
You are planning to deploy hundreds of microservices in your Google Kubernetes Engine (GKE) cluster. How should you secure communication between the microservices on GKE using a managed service?
A. Use global HTTP(S) Load Balancing with managed SSL certificates to protect your services.
B. Deploy open source Istio in your GKE cluster, and enable mTLS in your Service Mesh.
C. Install cert-manager on GKE to automatically renew the SSL certificates.
D. Install Anthos Service Mesh, and enable mTLS in your Service Mesh.
Question 193
You are developing an application that will store and access sensitive unstructured data objects in a Cloud Storage bucket. To comply with regulatory requirements, you need to ensure that all data objects are available for at least 7 years after their initial creation. Objects created more than 3 years ago are accessed very infrequently (less than once a year). You need to configure object storage while ensuring that storage cost is optimized. What should you do? (Choose two.)
A. Set a retention policy on the bucket with a period of 7 years.
B. Use IAM Conditions to provide access to objects 7 years after the object creation date.
C. Enable Object Versioning to prevent objects from being accidentally deleted for 7 years after object creation.
D. Create an object lifecycle policy on the bucket that moves objects from Standard Storage to Archive Storage after 3 years.
E. Implement a Cloud Function that checks the age of each object in the bucket and moves the objects older than 3 years to a second bucket with the Archive Storage class. Use Cloud Scheduler to trigger the Cloud Function on a daily schedule.
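For reference, the retention-policy and lifecycle approaches described in options A and D can also be configured programmatically. A minimal Python sketch using the google-cloud-storage client; the bucket name is a placeholder:

    # Sketch: set a 7-year retention policy and a lifecycle rule that moves
    # objects to Archive Storage after ~3 years. Bucket name is hypothetical.
    from google.cloud import storage

    SEVEN_YEARS_SECONDS = 7 * 365 * 24 * 60 * 60

    client = storage.Client()
    bucket = client.get_bucket("regulated-objects-bucket")

    # Retention policy: objects cannot be deleted or overwritten for 7 years.
    bucket.retention_period = SEVEN_YEARS_SECONDS

    # Lifecycle rule: transition objects to Archive Storage after ~3 years (in days).
    bucket.add_lifecycle_set_storage_class_rule("ARCHIVE", age=3 * 365)

    bucket.patch()  # persist both changes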
Question 194
You are developing an application using different microservices that must remain internal to the cluster. You want the ability to configure each microservice with a specific number of replicas. You also want the ability to address a specific microservice from any other microservice in a uniform way, regardless of the number of replicas the microservice scales to. You plan to implement this solution on Google Kubernetes Engine. What should you do?
A. Deploy each microservice as a Deployment. Expose the Deployment in the cluster using a Service, and use the Service DNS name to address it from other microservices within the cluster.
B. Deploy each microservice as a Deployment. Expose the Deployment in the cluster using an Ingress, and use the Ingress IP address to address the Deployment from other microservices within the cluster.
C. Deploy each microservice as a Pod. Expose the Pod in the cluster using a Service, and use the Service DNS name to address the microservice from other microservices within the cluster.
D. Deploy each microservice as a Pod. Expose the Pod in the cluster using an Ingress, and use the Ingress IP address to address the Pod from other microservices within the cluster.
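As background for the Service-based options: a ClusterIP Service gives a microservice a stable DNS name (service.namespace.svc.cluster.local) that load-balances across however many replicas the backing Deployment scales to. A short Python sketch of calling another microservice by that name; the "orders" Service, "default" namespace, port, and path are hypothetical placeholders:

    # Sketch: one microservice calling another through its Service DNS name.
    # The name stays the same regardless of the number of replicas behind it.
    import requests

    resp = requests.get("http://orders.default.svc.cluster.local:8080/api/v1/orders")
    resp.raise_for_status()
    print(resp.json())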
Question 195
You are building an application that uses a distributed microservices architecture. You want to measure the performance and system resource utilization in one of the microservices written in Java. What should you do?
A. Instrument the service with Cloud Profiler to measure CPU utilization and method-level execution times in the service.
B. Instrument the service with Cloud Debugger to investigate service errors.
C. Instrument the service with Cloud Trace to measure request latency.
D. Instrument the service with OpenCensus to measure service latency, and write custom metrics to Cloud Monitoring.
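For context on option D's approach: OpenCensus lets you record custom measures and export the aggregated data to Cloud Monitoring. The service in the question is written in Java, but the pattern is the same across languages; here is a minimal sketch in Python (the opencensus and opencensus-ext-stackdriver packages), with measure and view names made up for illustration:

    # Sketch: record a custom latency measure with OpenCensus and export it
    # to Cloud Monitoring. Measure/view names below are illustrative only.
    import time

    from opencensus.ext.stackdriver import stats_exporter
    from opencensus.stats import aggregation, measure, stats, view

    LATENCY_MS = measure.MeasureFloat("task_latency", "Latency of a task", "ms")

    LATENCY_VIEW = view.View(
        "task_latency_distribution",
        "Distribution of task latencies",
        [],
        LATENCY_MS,
        aggregation.DistributionAggregation([100.0, 200.0, 400.0, 1000.0, 2000.0]),
    )

    # Register the view, then start the exporter, which flushes aggregated
    # data to Cloud Monitoring in the background (roughly every 60 seconds).
    stats.stats.view_manager.register_view(LATENCY_VIEW)
    exporter = stats_exporter.new_stats_exporter()
    print(f"Exporting stats to project {exporter.options.project_id}")

    # Record one measurement around a unit of work (a long-running service
    # would do this per request).
    start = time.time()
    time.sleep(0.2)  # stand-in for real work
    mmap = stats.stats.stats_recorder.new_measurement_map()
    mmap.measure_float_put(LATENCY_MS, (time.time() - start) * 1000.0)
    mmap.record()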
Question 196
Your team is responsible for maintaining an application that aggregates news articles from many different sources. Your monitoring dashboard contains publicly accessible real-time reports and runs on a Compute Engine instance as a web application. External stakeholders and analysts need to access these reports via a secure channel without authentication. How should you configure this secure channel?
A. Add a public IP address to the instance. Use the service account key of the instance to encrypt the traffic.
B. Use Cloud Scheduler to trigger Cloud Build every hour to create an export from the reports. Store the reports in a public Cloud Storage bucket.
C. Add an HTTP(S) load balancer in front of the monitoring dashboard. Configure Identity-Aware Proxy to secure the communication channel.
D. Add an HTTP(S) load balancer in front of the monitoring dashboard. Set up a Google-managed SSL certificate on the load balancer for traffic encryption.
Question 197
You are planning to add unit tests to your application. You need to be able to assert that published Pub/Sub messages are processed by your subscriber in order. You want the unit tests to be cost-effective and reliable. What should you do?
A. Implement a mocking framework.
B. Create a topic and subscription for each tester.
C. Add a filter by tester to the subscription.
D. Use the Pub/Sub emulator.
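For context on option D: the Pub/Sub emulator runs locally (for example via gcloud beta emulators pubsub start), and the standard client libraries talk to it whenever PUBSUB_EMULATOR_HOST is set, so tests incur no cost and no network flakiness. A minimal Python sketch, assuming the emulator is already running on localhost:8085 and using made-up topic and subscription names:

    # Sketch: unit-test ordered message processing against the local Pub/Sub
    # emulator (e.g. started with "gcloud beta emulators pubsub start").
    import os

    from google.cloud import pubsub_v1

    # Clients route to the emulator when this is set before they are constructed.
    os.environ["PUBSUB_EMULATOR_HOST"] = "localhost:8085"

    project = "test-project"  # any project ID works against the emulator
    publisher = pubsub_v1.PublisherClient(
        publisher_options=pubsub_v1.types.PublisherOptions(enable_message_ordering=True)
    )
    subscriber = pubsub_v1.SubscriberClient()

    topic_path = publisher.topic_path(project, "orders-topic")
    sub_path = subscriber.subscription_path(project, "orders-sub")
    publisher.create_topic(request={"name": topic_path})
    subscriber.create_subscription(
        request={"name": sub_path, "topic": topic_path, "enable_message_ordering": True}
    )

    # Publish ordered messages, then pull and assert that order is preserved
    # (a complete test would also acknowledge the messages).
    for i in range(3):
        publisher.publish(topic_path, f"msg-{i}".encode(), ordering_key="orders").result()

    received = []
    while len(received) < 3:
        response = subscriber.pull(request={"subscription": sub_path, "max_messages": 3})
        received.extend(m.message.data.decode() for m in response.received_messages)
    assert received == ["msg-0", "msg-1", "msg-2"]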
Question 198
You have an application deployed in Google Kubernetes Engine (GKE) that reads and processes Pub/Sub messages. Each Pod handles a fixed number of messages per minute. The rate at which messages are published to the Pub/Sub topic varies considerably throughout the day and week, including occasional large batches of messages published at a single moment.
You want to scale your GKE Deployment to be able to process messages in a timely manner. What GKE feature should you use to automatically adapt your workload?
A. Vertical Pod Autoscaler in Auto mode
B. Vertical Pod Autoscaler in Recommendation mode
C. Horizontal Pod Autoscaler based on an external metric
D. Horizontal Pod Autoscaler based on resource utilization
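For context on option C: the subscription's backlog (undelivered message count) can be surfaced to GKE as an external metric, for example through the Custom Metrics Stackdriver Adapter, and a Horizontal Pod Autoscaler can then scale the Deployment on it. A sketch using the official kubernetes Python client; the Deployment name, subscription ID, target value, and replica bounds are hypothetical placeholders:

    # Sketch: create an HPA that scales a Deployment on the Pub/Sub backlog,
    # assuming the Custom Metrics Stackdriver Adapter exposes the metric.
    from kubernetes import client, config

    config.load_kube_config()  # or config.load_incluster_config() inside the cluster

    hpa = client.V2HorizontalPodAutoscaler(
        api_version="autoscaling/v2",
        kind="HorizontalPodAutoscaler",
        metadata=client.V1ObjectMeta(name="pubsub-worker-hpa"),
        spec=client.V2HorizontalPodAutoscalerSpec(
            scale_target_ref=client.V2CrossVersionObjectReference(
                api_version="apps/v1", kind="Deployment", name="pubsub-worker"
            ),
            min_replicas=1,
            max_replicas=50,
            metrics=[
                client.V2MetricSpec(
                    type="External",
                    external=client.V2ExternalMetricSource(
                        metric=client.V2MetricIdentifier(
                            name="pubsub.googleapis.com|subscription|num_undelivered_messages",
                            selector=client.V1LabelSelector(
                                match_labels={"resource.labels.subscription_id": "orders-sub"}
                            ),
                        ),
                        target=client.V2MetricTarget(type="AverageValue", average_value="100"),
                    ),
                )
            ],
        ),
    )

    client.AutoscalingV2Api().create_namespaced_horizontal_pod_autoscaler(
        namespace="default", body=hpa
    )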
Question 199
You are using Cloud Run to host a web application. You need to securely obtain the application project ID and region where the application is running and display this information to users. You want to use the most performant approach. What should you do?
A. Use HTTP requests to query the metadata server available at the http://metadata.google.internal/ endpoint, sending the Metadata-Flavor: Google header.
B. In the Google Cloud console, navigate to the Project Dashboard and gather configuration details. Navigate to the Cloud Run “Variables & Secrets” tab, and add the desired environment variables in Key:Value format.
C. In the Google Cloud console, navigate to the Project Dashboard and gather configuration details. Write the application configuration information to Cloud Run's in-memory container filesystem.
D. Make an API call to the Cloud Asset Inventory API from the application and format the request to include instance metadata.
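For context on option A: Cloud Run exposes an internal metadata server that returns the project ID at /computeMetadata/v1/project/project-id and the region (in the form projects/PROJECT_NUMBER/regions/REGION) at /computeMetadata/v1/instance/region, provided the Metadata-Flavor: Google header is sent. A minimal Python sketch:

    # Sketch: read the project ID and region from the Cloud Run metadata server.
    # Only reachable from inside the running service; the header is mandatory.
    import requests

    METADATA_URL = "http://metadata.google.internal/computeMetadata/v1"
    HEADERS = {"Metadata-Flavor": "Google"}

    project_id = requests.get(f"{METADATA_URL}/project/project-id", headers=HEADERS).text

    # The region endpoint returns "projects/PROJECT_NUMBER/regions/REGION".
    region = requests.get(f"{METADATA_URL}/instance/region", headers=HEADERS).text.split("/")[-1]

    print(f"Running in project {project_id}, region {region}")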
Question 200
You need to deploy resources from your laptop to Google Cloud using Terraform. Resources in your Google Cloud environment must be created using a service account. Your Cloud Identity has the roles/iam.serviceAccountTokenCreator Identity and Access Management (IAM) role and the necessary permissions to deploy the resources using Terraform. You want to set up your development environment to deploy the desired resources following Google-recommended best practices. What should you do?
A. 1. Download the service account’s key file in JSON format, and store it locally on your laptop.
2. Set the GOOGLE_APPLICATION_CREDENTIALS environment variable to the path of your downloaded key file.
B. 1. Run the following command from a command line: gcloud config set auth/impersonate_service_account SERVICE_ACCOUNT@PROJECT_ID.iam.gserviceaccount.com.
2. Set the GOOGLE_OAUTH_ACCESS_TOKEN environment variable to the value that is returned by the gcloud auth print-access-token command.
C. 1. Run the following command from a command line: gcloud auth application-default login.
2. In the browser window that opens, authenticate using your personal credentials.
D. 1. Store the service account's key file in JSON format in Hashicorp Vault.
2. Integrate Terraform with Vault to retrieve the key file dynamically, and authenticate to Vault using a short-lived access token.
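For context on the impersonation approach in option B: instead of downloading a key file, your own Cloud Identity credentials mint a short-lived access token for the service account, which is what roles/iam.serviceAccountTokenCreator permits. A Python sketch of the same principle using the google-auth library; the service account email is a placeholder:

    # Sketch: mint a short-lived access token for a service account by
    # impersonation, using your own credentials (requires
    # roles/iam.serviceAccountTokenCreator). The target email is a placeholder.
    import google.auth
    from google.auth import impersonated_credentials
    from google.auth.transport.requests import Request

    source_credentials, _ = google.auth.default()  # your Cloud Identity via ADC

    target_credentials = impersonated_credentials.Credentials(
        source_credentials=source_credentials,
        target_principal="SERVICE_ACCOUNT@PROJECT_ID.iam.gserviceaccount.com",
        target_scopes=["https://www.googleapis.com/auth/cloud-platform"],
        lifetime=3600,
    )
    target_credentials.refresh(Request())

    # Comparable to the token returned by "gcloud auth print-access-token"
    # for the impersonated service account; no key file is ever downloaded.
    print(target_credentials.token)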