Question 151
You are planning the migration of a large Oracle database to AlloyDB for PostgreSQL. The database contains a large amount of application logic written as stored procedures that needs to be migrated to the target database. You want to minimize both migration effort and downtime for the migration. What should you do?
A. Use Ora2pg for schema and code conversion and data migration.
B. Use Ora2pg for schema and code conversion. Use the oracle_fdw extension in AlloyDB and replicate data from source to destination by using CREATE TABLE AS SELECT statements.
C. Use Database Migration Service. Set up a conversion workspace for schema and code conversion. Create a migration job to perform backfill and change data capture to replicate data from source to destination.
D. Use Database Migration Service. Set up a legacy conversion workspace for schema and code conversion. Create a migration job to perform backfill and change data capture to replicate data from source to destination.
Question 152
Your company uses a custom application to service thousands of users. The application runs on a Compute Engine instance and uses a Cloud SQL for PostgreSQL database. The company requires database passwords to be changed every 60 days. You need to ensure that the credentials used by the web application to connect to the database are managed securely. What should you do?
A. 1. Store the credentials in an encrypted text file in the application.
2. Use Cloud Key Management Service (Cloud KMS) to store the key for decrypting the text file.
3. Modify the application to decrypt the text file and retrieve the credentials on startup.
4. Update the text file every 60 days.
B. 1. Store the credentials to the database in Secret Manager.
2. Modify the application to retrieve the credentials from Secret Manager on startup.
3. Configure the rotation interval to 60 days.
C. 1. Store the credentials in a text file in a Cloud Storage bucket.
2. Modify the application to download the text file and retrieve the credentials on startup.
3. Update the text file every 60 days.
D. 1. Configure IAM database authentication for the application to connect to the database.
2. Create an IAM user and map it to a separate database user for each application user.
3. Require users to update their passwords every 60 days.
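For background on the Secret Manager approach in option B, here is a minimal sketch of how an application could read its database credentials at startup; the project ID and secret name are placeholders, not part of the question.

# Sketch: fetch a database password from Secret Manager at application startup.
# Assumes the google-cloud-secret-manager library; "my-project" and "db-password"
# are placeholder names.
from google.cloud import secretmanager

def get_db_password(project_id: str = "my-project", secret_id: str = "db-password") -> str:
    client = secretmanager.SecretManagerServiceClient()
    # "latest" resolves to the newest enabled version, so a 60-day rotation
    # only requires adding a new secret version with the new password.
    name = f"projects/{project_id}/secrets/{secret_id}/versions/latest"
    response = client.access_secret_version(request={"name": name})
    return response.payload.data.decode("UTF-8")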
Question 153
You are a DBA at a retail company. The production databases are running in Cloud SQL for MySQL Enterprise Plus edition, version 8.0.34, in the us-central1 region. You need to set up and test disaster recovery (DR) with zero data loss in the us-west1 region. What should you do?
A. Use advanced DR by setting up a cascading read replica in the us-west1 region, and designate it as the failover DR replica. Test switchover by using the gcloud switchover command.
B. Create a cross-region read replica, version 8.0.37, in the us-west1 region. Designate it as the failover DR replica, and test switchover by using the gcloud switchover command.
C. Create a cross-region read replica, version 8.0.34, in the us-west1 region. Designate it as the failover DR replica, and test switchover by using the gcloud switchover command.
D. Create a cross-region read replica, version 8.0.34, in the us-west1 region. Test switchover by using the gcloud promote-replica with failover command.
Question 154
You are designing the backup and recovery strategy of a Cloud SQL for MySQL database that serves a write-heavy application. You need to be able to restore the database to any point in time within the past five days while minimizing the performance impact of database backups. What should you do?
A. Enable automated backups and point-in-time recovery. Configure the backup window during periods of low database traffic. Set backup and transaction log retention to at least five days.
B. Schedule hourly on-demand backups and enable point-in-time recovery. Delete any backups that were taken more than five days ago.
C. Schedule a database export job to run every hour from a read replica instance. Retain the export dumps for at least five days.
D. Schedule a database export job to run every hour. Use serverless export. Retain the export dumps for at least five days.
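For background on the automated backup approach in option A, here is a rough sketch of enabling automated backups plus point-in-time recovery through the Cloud SQL Admin API; the project and instance names are placeholders, and the exact settings shown are illustrative assumptions.

# Sketch: turn on automated backups and point-in-time recovery for a Cloud SQL
# for MySQL instance via the Cloud SQL Admin API (sqladmin). Placeholder names.
from googleapiclient import discovery

sqladmin = discovery.build("sqladmin", "v1")
patch_body = {
    "settings": {
        "backupConfiguration": {
            "enabled": True,                   # automated daily backups
            "binaryLogEnabled": True,          # required for MySQL point-in-time recovery
            "startTime": "03:00",              # backup window during low traffic (UTC)
            "transactionLogRetentionDays": 5,  # keep transaction logs for 5 days of PITR
            "backupRetentionSettings": {"retainedBackups": 5},
        }
    }
}
sqladmin.instances().patch(
    project="my-project", instance="my-mysql-instance", body=patch_body
).execute()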
Question 155
Your company’s rapidly growing ecommerce application uses Cloud SQL for MySQL. Performance issues are surfacing during peak traffic periods. You need to ensure that the database can handle the load while minimizing downtime. What should you do?
A. Use read replicas for query offloading, configure automatic failover, and test failover procedures regularly.
B. Vertically scale the primary instance by significantly increasing compute resources, such as vCPUs and memory.
C. Increase storage size on the existing instance and implement client-side caching for frequently accessed data.
D. Migrate to a Cloud SQL for PostgreSQL database for better performance during high load.
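For background on the read replica approach in option A, here is a minimal sketch of creating a Cloud SQL read replica for query offloading through the Admin API; the project, instance, region, and machine tier values are placeholders.

# Sketch: create a Cloud SQL read replica to offload read queries from the primary.
# Assumes the sqladmin discovery client; all names below are placeholders.
from googleapiclient import discovery

sqladmin = discovery.build("sqladmin", "v1")
replica_body = {
    "name": "my-mysql-instance-replica-1",
    "masterInstanceName": "my-mysql-instance",  # primary instance to replicate from
    "region": "us-central1",
    "settings": {"tier": "db-custom-4-16384"},
}
sqladmin.instances().insert(project="my-project", body=replica_body).execute()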
Question 156
Your organization’s databases are running in Cloud SQL. Due to data residency regulations, your organization mandates that backups are stored in a specific region. You are also required to make sure that the databases have monthly full backups, which are retained until your Cloud SQL instance is deleted, along with daily backups. What should you do?
A. Schedule automated backups for Cloud SQL with a retention period of 365 days.
B. Schedule daily and monthly on-demand backups.
C. Schedule automated backups, and create on-demand monthly backups to custom backup locations.
D. Create an Organization Policy Service resource location constraint. Schedule automated backups with the retention period of 30 days.
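For background on the on-demand backup approach in option C, here is a rough sketch of taking an on-demand Cloud SQL backup stored in a specific region; the project, instance, and location values are placeholders, and the request body reflects my assumption of the BackupRun resource fields.

# Sketch: take an on-demand Cloud SQL backup in a custom backup location,
# e.g. a monthly backup retained until the instance is deleted. Placeholder names.
from googleapiclient import discovery

sqladmin = discovery.build("sqladmin", "v1")
backup_body = {
    "description": "monthly-full-backup",
    "location": "europe-west3",  # custom backup location for data residency
}
sqladmin.backupRuns().insert(
    project="my-project", instance="my-sql-instance", body=backup_body
).execute()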
Question 157
Your company is rapidly expanding its user base across North America, nearly doubling in the last 6 months. This expansion causes a substantial increase in query volume on your mission-critical Cloud SQL database. This has led to noticeable performance issues and slower query response times. You suspect that your Cloud SQL instance may not be able to handle the incremental load. You need to identify the root cause of this performance bottleneck and ensure that your database can scale with your growing user base. What should you do?
A. Migrate the database to Spanner.
B. Evaluate the application connection pooling configuration settings.
C. Review Cloud SQL System Insights for the instance, and analyze CPU, memory, and storage utilization metrics.
D. Create two Cloud SQL instances, and split the workload between them.
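For background on option C, System Insights surfaces these metrics in the console, but the same data can also be pulled from Cloud Monitoring. Here is a minimal sketch of querying recent CPU utilization for Cloud SQL instances; the project ID is a placeholder.

# Sketch: read the last hour of Cloud SQL CPU utilization from Cloud Monitoring.
# Assumes the google-cloud-monitoring library; "my-project" is a placeholder.
import time
from google.cloud import monitoring_v3

client = monitoring_v3.MetricServiceClient()
now = int(time.time())
interval = monitoring_v3.TimeInterval(
    {"end_time": {"seconds": now}, "start_time": {"seconds": now - 3600}}
)
results = client.list_time_series(
    request={
        "name": "projects/my-project",
        "filter": 'metric.type = "cloudsql.googleapis.com/database/cpu/utilization"',
        "interval": interval,
        "view": monitoring_v3.ListTimeSeriesRequest.TimeSeriesView.FULL,
    }
)
for series in results:
    for point in series.points:
        print(series.resource.labels.get("database_id", ""), point.value.double_value)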
Question 158
You want to migrate an on-premises MySQL database to Cloud SQL. The on-premises database currently has 16 cores and 64 GB of RAM and averages 75% CPU utilization to support an application with over 100,000 tables. You need to specify the most cost-effective machine size when migrating to Cloud SQL for MySQL. What should you do?
A. Select a Cloud SQL for MySQL database with a machine configuration of 12 cores and 48 GB of RAM.
B. Select a Cloud SQL for MySQL database with a machine configuration of 16 cores and 64 GB of RAM.
C. Select a Cloud SQL for MySQL database with a machine configuration of 32 cores and 256 GB of RAM.
D. Select a Cloud SQL for MySQL database with a machine configuration of 64 cores and 768 GB of RAM.
Question 159
You have a regional Spanner instance with no autoscaler that is serving a production workload. You observed a surge in write activity to the database during an ongoing promotional event. You received an alert that the database is close to the storage limit. Based on the storage utilization trend, the database will run out of storage in the next few hours. You want to resolve this issue as soon as possible. What should you do?
A. Create a custom instance configuration and add a custom read-only replica to the Spanner instance.
B. Increase the compute capacity of the Spanner instance.
C. Move the Spanner instance to a multi-regional configuration.
D. Archive and delete historical data from the database.
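For background on option B, a Spanner instance's storage limit scales with its compute capacity, so adding nodes also raises the storage ceiling. Here is a rough sketch of resizing an instance with the Python client; the instance ID and node count are placeholders.

# Sketch: increase the node count of a Spanner instance, which also raises
# its storage limit. Assumes the google-cloud-spanner library; placeholder names.
from google.cloud import spanner

client = spanner.Client(project="my-project")
instance = client.instance("my-instance", node_count=6)  # new, larger node count
operation = instance.update()    # long-running operation
operation.result(timeout=300)    # wait for the resize to complete
print("Instance resized to", instance.node_count, "nodes")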
Question 160
Your organization deploys a high-volume, low-latency sensor data ingestion system. Resilience is crucial, and the workload is expected to grow significantly over time. You need to design a database tier on Google Cloud to ensure faster performance and high availability, while following Google-recommended practices. What should you do?
A. Deploy multiple Bigtable clusters, shard data manually across them, and set up the application logic for fault tolerance.
B. Deploy a single-zone Bigtable cluster and optimize the application code for maximum write throughput.
C. Deploy a single-region Bigtable instance and increase the nodes to maximize data durability.
D. Provision a multi-zone Bigtable cluster, configure replication, perform an application stress-test, and set up performance monitoring.
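For background on option D, here is a minimal sketch of creating a Bigtable instance with two clusters in different zones so that data is replicated automatically; the instance ID, cluster IDs, zones, and node counts are placeholders.

# Sketch: create a replicated, multi-cluster Bigtable instance for a
# high-volume, low-latency ingestion workload. Placeholder names throughout.
from google.cloud import bigtable
from google.cloud.bigtable import enums

client = bigtable.Client(project="my-project", admin=True)
instance = client.instance(
    "sensor-ingest", instance_type=enums.Instance.Type.PRODUCTION
)
cluster_a = instance.cluster("sensor-ingest-c1", location_id="us-central1-a", serve_nodes=3)
cluster_b = instance.cluster("sensor-ingest-c2", location_id="us-central1-b", serve_nodes=3)
operation = instance.create(clusters=[cluster_a, cluster_b])
operation.result(timeout=600)  # wait for the instance and replication to be ready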