Question 101
You are managing a set of Cloud SQL databases in Google Cloud. Regulations require that database backups reside in the region where the database is created. You want to minimize operational costs and administrative effort. What should you do?
A. Configure the automated backups to use a regional Cloud Storage bucket as a custom location.
B. Use the default configuration for the automated backups location.
C. Disable automated backups, and create an on-demand backup routine to a regional Cloud Storage bucket.
D. Disable automated backups, and configure serverless exports to a regional Cloud Storage bucket.
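For reference, the custom-location approach in option A maps to a single gcloud command; a minimal sketch, where the instance name and region are placeholders:

```
# Pin automated backups to a specific region instead of the default
# multi-region closest to the instance (names are placeholders).
gcloud sql instances patch my-instance --backup-location=us-central1
```

(Cloud SQL manages the backup storage itself; the custom location is a region rather than a bucket you own.)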
Question 102
Your ecommerce application, which connects to your Cloud SQL for SQL Server instance, is expected to receive additional traffic over the holiday weekend. You want to follow Google-recommended practices to set up alerts for CPU and memory metrics so that you can be notified by text message at the first sign of potential issues. What should you do?
A. Use a Cloud Function to pull CPU and memory metrics from your Cloud SQL instance and to call a custom service to send alerts.
B. Use Error Reporting to monitor CPU and memory metrics and to configure SMS notification channels.
C. Use Cloud Logging to set up a log sink for CPU and memory metrics and to configure a sink destination to send a message to Pub/Sub.
D. Use Cloud Monitoring to set up an alerting policy for CPU and memory metrics and to configure SMS notification channels.
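If Cloud Monitoring is used, the channel and policy can be created from the command line; a sketch, assuming the beta/alpha gcloud surfaces and placeholder values:

```
# Create an SMS notification channel (display name and number are placeholders).
gcloud beta monitoring channels create \
    --display-name="On-call SMS" \
    --type=sms \
    --channel-labels=number=+15551234567

# Create the alerting policy from a JSON file that defines CPU and memory
# utilization conditions and references the channel created above.
gcloud alpha monitoring policies create --policy-from-file=cpu-mem-policy.json
```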
Question 103
You finished migrating an on-premises MySQL database to Cloud SQL. You want to ensure that the daily export of a table, which was previously a cron job running on the database server, continues. You want the solution to minimize cost and operational overhead. What should you do?
A. Use Cloud Scheduler and Cloud Functions to run the daily export.
B. Create a streaming Dataflow job to export the table.
C. Set up Cloud Composer, and create a task to export the table daily.
D. Run the cron job on a Compute Engine instance to continue the export.
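A Cloud Scheduler trigger for an export function might look like the following sketch; the cron schedule, function URL, and service account are placeholders:

```
# Invoke an HTTP Cloud Function every day at 02:00; the function is assumed
# to call the Cloud SQL Admin API to export the table.
gcloud scheduler jobs create http daily-table-export \
    --schedule="0 2 * * *" \
    --uri="https://us-central1-my-project.cloudfunctions.net/export-table" \
    --oidc-service-account-email=scheduler-sa@my-project.iam.gserviceaccount.com
```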
Question 104
Your organization needs to migrate a critical, on-premises MySQL database to Cloud SQL for MySQL. The on-premises database is on a version of MySQL that is supported by Cloud SQL and uses the InnoDB storage engine. You need to migrate the database while preserving transactions and minimizing downtime. What should you do?
A. 1. Use Database Migration Service to connect to your on-premises database, and choose continuous replication.
2. After the on-premises database is migrated, promote the Cloud SQL for MySQL instance, and connect applications to your Cloud SQL instance.
B. 1. Build a Cloud Data Fusion pipeline for each table to migrate data from the on-premises MySQL database to Cloud SQL for MySQL.
2. Schedule downtime to run each Cloud Data Fusion pipeline.
3. Verify that the migration was successful.
4. Re-point the applications to the Cloud SQL for MySQL instance.
C. 1. Pause the on-premises applications.
2. Use the mysqldump utility to dump the database content in compressed format.
3. Run gsutil -m to move the dump file to Cloud Storage.
4. Use the Cloud SQL for MySQL import option.
5. After the import operation is complete, re-point the applications to the Cloud SQL for MySQL instance.
D. 1. Pause the on-premises applications.
2. Use the mysqldump utility to dump the database content in CSV format.
3. Run gsutil -m to move the dump file to Cloud Storage.
4. Use the Cloud SQL for MySQL import option.
5. After the import operation is complete, re-point the applications to the Cloud SQL for MySQL instance.
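The continuous-replication flow in option A can be sketched with the Database Migration Service gcloud surface; the resource names, region, and connection profiles are placeholders and must already exist:

```
# Create a continuous (change data capture) migration job.
gcloud database-migration migration-jobs create mysql-migration \
    --region=us-central1 \
    --type=CONTINUOUS \
    --source=on-prem-mysql-profile \
    --destination=cloudsql-mysql-profile

# After replication catches up and you are ready to cut over, promote the
# Cloud SQL destination to a standalone read/write instance.
gcloud database-migration migration-jobs promote mysql-migration \
    --region=us-central1
```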
Question 105
Your company is developing a global ecommerce website on Google Cloud. Your development team is working on a shopping cart service that is durable and elastically scalable with live traffic. Business disruptions from unplanned downtime are expected to be less than 5 minutes per month. In addition, the application needs to have very low latency writes. You need a data storage solution that has high write throughput and provides 99.99% uptime. What should you do?
A. Use Cloud SQL for data storage.
B. Use Cloud Spanner for data storage.
C. Use Memorystore for data storage.
D. Use Bigtable for data storage.
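For context, a regional Cloud Spanner instance carries a 99.99% availability SLA (multi-region configurations offer 99.999%); creating one is a single command, where the instance name, config, and node count are placeholders:

```
# Create a regional Spanner instance for the shopping cart service.
gcloud spanner instances create shopping-cart \
    --config=regional-us-central1 \
    --description="Shopping cart service" \
    --nodes=3
```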
Question 106
Your organization has hundreds of Cloud SQL for MySQL instances. You want to follow Google-recommended practices to optimize platform costs. What should you do?
A. Use Query Insights to identify idle instances.
B. Remove inactive user accounts.
C. Run the Recommender API to identify overprovisioned instances.
D. Build indexes on heavily accessed tables.
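The overprovisioned-instance recommender mentioned in option C can be queried per project and region; a sketch with placeholder project and location:

```
# List rightsizing recommendations for Cloud SQL instances.
gcloud recommender recommendations list \
    --project=my-project \
    --location=us-central1 \
    --recommender=google.cloudsql.instance.OverprovisionedRecommender
```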
Question 107
Your organization is running a critical production database on a virtual machine (VM) on Compute Engine. The VM has an ext4-formatted persistent disk for data files. The database will soon run out of storage space. You need to implement a solution that avoids downtime. What should you do?
A. In the Google Cloud Console, increase the size of the persistent disk, and use the resize2fs command to extend the file system.
B. In the Google Cloud Console, increase the size of the persistent disk, and use the fdisk command to verify that the new space is ready to use.
C. In the Google Cloud Console, create a snapshot of the persistent disk, restore the snapshot to a new larger disk, unmount the old disk, mount the new disk, and restart the database service.
D. In the Google Cloud Console, create a new persistent disk attached to the VM, and configure the database service to move the files to the new disk.
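The online-resize approach in option A takes two commands, neither of which requires downtime; the disk name, zone, size, and device path are placeholders:

```
# Grow the persistent disk while the VM keeps running.
gcloud compute disks resize data-disk --zone=us-central1-a --size=500GB

# On the VM, extend the ext4 file system to use the new space (ext4
# supports online growth, so the disk stays mounted).
sudo resize2fs /dev/sdb
```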
Question 108
You want to migrate your on-premises PostgreSQL database to Compute Engine. You need to migrate this database with the minimum downtime possible. What should you do?
A. Perform a full backup of your on-premises PostgreSQL, and then, in the migration window, perform an incremental backup.
B. Create a read replica on Cloud SQL, and then promote it to a read/write standalone instance.
C. Use Database Migration Service to migrate your database.
D. Create a hot standby on Compute Engine, and use PgBouncer to switch over the connections.
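The hot-standby approach in option D could be seeded with pg_basebackup; a sketch, assuming a PostgreSQL 14 data directory and an existing replication user:

```
# Clone the on-premises primary onto the Compute Engine VM. -R writes the
# replication settings so the standby streams WAL changes until cutover.
pg_basebackup -h onprem-primary.example.com -U replicator \
    -D /var/lib/postgresql/14/main -R -X stream -P
```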
Question 109
You have an application that sends banking events to Bigtable cluster-a in us-east1. You decide to add cluster-b in us-central1. Cluster-a replicates data to cluster-b. You need to ensure that Bigtable continues to accept read and write requests if one of the clusters becomes unavailable and that requests are routed automatically to the other cluster. What deployment strategy should you use?
A. Use the default app profile with single-cluster routing.
B. Use the default app profile with multi-cluster routing.
C. Create a custom app profile with multi-cluster routing.
D. Create a custom app profile with single-cluster routing.
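A custom app profile with multi-cluster routing can be created with gcloud; the profile and instance IDs are placeholders:

```
# --route-any enables multi-cluster routing, so Bigtable automatically
# fails requests over to the surviving cluster.
gcloud bigtable app-profiles create banking-events \
    --instance=events-instance \
    --route-any \
    --description="Multi-cluster routing for automatic failover"
```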
Question 110
Your organization works with sensitive data that requires you to manage your own encryption keys. You are working on a project that stores that data in a Cloud SQL database. You need to ensure that stored data is encrypted with your keys. What should you do?
A. Export data periodically to a Cloud Storage bucket protected by Customer-Supplied Encryption Keys.
B. Use Cloud SQL Auth proxy.
C. Connect to Cloud SQL using a connection that has SSL encryption.
D. Use customer-managed encryption keys with Cloud SQL.
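A CMEK-protected instance (option D's approach) is created by pointing Cloud SQL at your Cloud KMS key; all resource names here are placeholders, and the key must live in the same region as the instance:

```
# Create a Cloud SQL instance whose storage is encrypted with your key.
gcloud sql instances create my-instance \
    --database-version=MYSQL_8_0 \
    --region=us-central1 \
    --disk-encryption-key=projects/my-project/locations/us-central1/keyRings/my-ring/cryptoKeys/my-key
```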