Question 121
You are managing a Cloud SQL for PostgreSQL instance in Google Cloud. You have a primary instance in region 1 and a read replica in region 2. After a failure of region 1, you need to make the Cloud SQL instance available again. You want to minimize data loss and follow Google-recommended practices. What should you do?
A. Restore the Cloud SQL instance from the automatic backups in region 3.
B. Restore the Cloud SQL instance from the automatic backups in another zone in region 1.
C. Check "Lag Bytes" for the read replica instance in the monitoring dashboard. Check the replication status using pg_catalog.pg_last_wal_receive_lsn(). Then, fail over to region 2 by promoting the read replica instance.
D. Check your instance operational log for the automatic failover status. Look for the time, type, and status of the operations. If the failover operation is successful, no action is necessary. Otherwise, manually perform gcloud sql instances failover.
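For reference, the failover path described in option C comes down to a replication check on the replica followed by a single promotion command. The instance name below is a placeholder, and the commands assume an authenticated gcloud environment:

```shell
# On the read replica, check how far WAL has been received and replayed.
psql -c "SELECT pg_last_wal_receive_lsn(), pg_last_wal_replay_lsn();"

# Promote the cross-region read replica in region 2 to a standalone
# primary instance ("my-replica-region2" is a placeholder name).
gcloud sql instances promote-replica my-replica-region2
```

Note that promotion is one-way: after the replica becomes a standalone primary, replication from the failed region 1 instance cannot be resumed.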
Question 122
You need to issue a new server certificate because your old one is expiring. You need to avoid a restart of your Cloud SQL for MySQL instance. What should you do in your Cloud SQL instance?
A. Issue a rollback, and download your server certificate.
B. Create a new client certificate, and download it.
C. Create a new server certificate, and download it.
D. Reset your SSL configuration, and download your server certificate.
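Creating a new server certificate (option C) can be done without restarting the instance. A sketch of the rotation workflow, with "my-instance" as a placeholder name, assuming an authenticated gcloud environment:

```shell
# Create a new server CA certificate alongside the active one;
# the instance keeps running.
gcloud sql ssl server-ca-certs create --instance=my-instance

# Download the current server CA certificate so clients can be
# updated to trust it before the rotation.
gcloud sql instances describe my-instance \
    --format="value(serverCaCert.cert)" > server-ca.pem

# Once clients trust the new certificate, rotate to it.
gcloud sql ssl server-ca-certs rotate --instance=my-instance
```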
Question 123
Your company is migrating all legacy applications to Google Cloud. All on-premises applications are using legacy Oracle 12c databases with Oracle Real Application Cluster (RAC) for high availability (HA) and Oracle Data Guard for disaster recovery. You need a solution that requires minimal code changes, provides the same high availability you have today on-premises, and supports a low latency network for migrated legacy applications. What should you do?
A. Migrate the databases to Cloud Spanner.
B. Migrate the databases to Cloud SQL, and enable a standby database.
C. Migrate the databases to Compute Engine using regional persistent disks.
D. Migrate the databases to Bare Metal Solution for Oracle.
Question 124
Your company is evaluating Google Cloud database options for a mission-critical global payments gateway application. The application must be available 24/7 to users worldwide, horizontally scalable, and support open source databases. You need to select an automatically shardable, fully managed database with 99.999% availability and strong transactional consistency. What should you do?
A. Select Bare Metal Solution for Oracle.
B. Select Cloud SQL.
C. Select Bigtable.
D. Select Cloud Spanner.
Question 125
You are a DBA managing a Cloud SQL for PostgreSQL instance. You want the applications to have passwordless authentication for read and write access to the database. Which authentication mechanism should you use?
A. Use Identity and Access Management (IAM) authentication.
B. Use Managed Active Directory authentication.
C. Use Cloud SQL federated queries.
D. Use PostgreSQL database's built-in authentication.
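IAM database authentication (option A) is enabled with a database flag and an IAM-type database user. The instance name and email below are placeholders, assuming an authenticated gcloud environment:

```shell
# Enable IAM database authentication on the instance.
gcloud sql instances patch my-instance \
    --database-flags=cloudsql.iam_authentication=on

# Add an IAM principal as a database user; it authenticates with
# short-lived IAM tokens instead of a password.
gcloud sql users create alice@example.com \
    --instance=my-instance --type=cloud_iam_user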
Question 126
You are migrating your 2 TB on-premises PostgreSQL cluster to Compute Engine. You want to set up your new environment in an Ubuntu virtual machine instance in Google Cloud and seed the data to a new instance. You need to plan your database migration to ensure minimum downtime. What should you do?
A. 1. Take a full export while the database is offline.
2. Create a bucket in Cloud Storage.
3. Transfer the dump file to the bucket you just created.
4. Import the dump file into the Google Cloud primary server.
B. 1. Take a full export while the database is offline.
2. Create a bucket in Cloud Storage.
3. Transfer the dump file to the bucket you just created.
4. Restore the backup into the Google Cloud primary server.
C. 1. Take a full backup while the database is online.
2. Create a bucket in Cloud Storage.
3. Transfer the backup to the bucket you just created.
4. Restore the backup into the Google Cloud primary server.
5. Create a recovery.conf file in the $PG_DATA directory.
6. Stop the source database.
7. Transfer the write-ahead logs to the bucket you created before.
8. Start the PostgreSQL service.
9. Wait until the Google Cloud primary server syncs with the running primary server.
D. 1. Take a full export while the database is online.
2. Create a bucket in Cloud Storage.
3. Transfer the dump file and write-ahead logs to the bucket you just created.
4. Restore the dump file into the Google Cloud primary server.
5. Create a recovery.conf file in the $PG_DATA directory.
6. Stop the source database.
7. Transfer the write-ahead logs to the bucket you created before.
8. Start the PostgreSQL service.
9. Wait until the Google Cloud primary server syncs with the running primary server.
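A minimal recovery.conf of the kind options C and D describe might look like the following (applies to PostgreSQL 11 and earlier; from version 12 onward these settings live in postgresql.conf together with a standby.signal file). The bucket path is an illustrative assumption:

```
# Hypothetical recovery.conf placed in the data directory of the
# Google Cloud server.
standby_mode = 'on'
# Pull archived WAL segments from the Cloud Storage bucket
# (bucket name and path are placeholders).
restore_command = 'gsutil cp gs://my-migration-bucket/wal/%f "%p"'
```

With standby_mode on, the server replays WAL from the bucket until it catches up with the source, keeping downtime to the final cutover.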
Question 127
You have deployed a Cloud SQL for SQL Server instance. In addition, you created a cross-region read replica for disaster recovery (DR) purposes. Your company requires you to maintain and monitor a recovery point objective (RPO) of less than 5 minutes. You need to verify that your cross-region read replica meets the allowed RPO. What should you do?
A. Use Cloud SQL instance monitoring.
B. Use the Cloud Monitoring dashboard with available metrics from Cloud SQL.
C. Use Cloud SQL logs.
D. Use the SQL Server Always On Availability Group dashboard.
Question 128
You want to migrate an on-premises mission-critical PostgreSQL database to Cloud SQL. The database must be able to withstand a zonal failure with less than five minutes of downtime and still not lose any transactions. You want to follow Google-recommended practices for the migration. What should you do?
A. Take nightly snapshots of the primary database instance, and restore them in a secondary zone.
B. Build a change data capture (CDC) pipeline to read transactions from the primary instance, and replicate them to a secondary instance.
C. Create a read replica in another region, and promote the read replica if a failure occurs.
D. Enable high availability (HA) for the database to make it regional.
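Enabling high availability (option D) converts the instance to a regional configuration with a synchronously replicated standby in another zone, so a zonal failure triggers automatic failover without losing committed transactions. A sketch, with a placeholder instance name and assuming an authenticated gcloud environment:

```shell
# Make the instance regional (HA); Cloud SQL provisions a standby
# in a different zone of the same region.
gcloud sql instances patch my-instance --availability-type=REGIONAL
```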
Question 129
You are migrating an on-premises application to Compute Engine and Cloud SQL. The application VMs will live in their own project, separate from the Cloud SQL instances, which have their own project. What should you do to configure the networks?
A. Create a new VPC network in each project, and use VPC Network Peering to connect the two together.
B. Create a Shared VPC that both the application VMs and Cloud SQL instances will use.
C. Use the default networks, and leverage Cloud VPN to connect the two together.
D. Place both the application VMs and the Cloud SQL instances in the default network of each project.
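Setting up a Shared VPC (option B) means designating one project as the host and attaching the others as service projects, so resources in both projects share one network. Project IDs below are placeholders, assuming an authenticated gcloud environment with the required Shared VPC admin role:

```shell
# Designate the host project that owns the shared network.
gcloud compute shared-vpc enable host-project-id

# Attach the service project so its resources can use the
# host project's VPC subnets.
gcloud compute shared-vpc associated-projects add service-project-id \
    --host-project=host-project-id
```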
Question 130
Your DevOps team is using Terraform to deploy applications and Cloud SQL databases. After every new application change is rolled out, the environment is torn down and recreated, and the persistent database layer is lost. You need to prevent the database from being dropped. What should you do?
A. Set Terraform deletion_protection to true.
B. Rerun terraform apply.
C. Create a read replica.
D. Use point-in-time recovery (PITR) to recover the database.
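In Terraform's Google provider, deletion protection (option A) is a top-level argument on the Cloud SQL resource that makes any plan which would destroy the instance fail. A minimal sketch; the resource name, instance name, region, and tier are illustrative assumptions:

```hcl
resource "google_sql_database_instance" "main" {
  name             = "app-db"          # placeholder instance name
  database_version = "POSTGRES_14"
  region           = "us-central1"

  settings {
    tier = "db-custom-2-7680"
  }

  # Terraform refuses to destroy this instance while true, so a
  # teardown/recreate cycle cannot drop the database layer.
  deletion_protection = true
}
```

Note this guards against Terraform-initiated destruction; the Cloud SQL API also has its own instance-level deletion protection setting, which is configured separately.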