Question 131
Your company's mission-critical, globally available application is supported by a Cloud Spanner database. Experienced users of the application have read and write access to the database, but new users are assigned read-only access to the database. You need to assign the appropriate Cloud Spanner Identity and Access Management (IAM) role to new users being onboarded soon. Which role should you assign?
A. roles/spanner.databaseReader
B. roles/spanner.databaseUser
C. roles/spanner.viewer
D. roles/spanner.backupWriter
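For reference, a minimal sketch of granting a database-level Spanner role with the Database Admin client; the project, instance, database, and member names are placeholders, and the read-modify-write pattern shown is one common way to update an IAM policy:

```python
# Hypothetical sketch: grant read-only access on a Spanner database.
# All resource and user names below are placeholders.
from google.cloud import spanner_admin_database_v1
from google.iam.v1 import iam_policy_pb2

client = spanner_admin_database_v1.DatabaseAdminClient()
db_name = client.database_path("my-project", "my-instance", "my-database")

# Read the current policy, append a binding, and write it back.
policy = client.get_iam_policy(
    request=iam_policy_pb2.GetIamPolicyRequest(resource=db_name)
)
policy.bindings.add(
    role="roles/spanner.databaseReader",
    members=["user:new.user@example.com"],
)
client.set_iam_policy(
    request=iam_policy_pb2.SetIamPolicyRequest(resource=db_name, policy=policy)
)
```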
Question 132
Your company is migrating from an on-premises database to a single-region Spanner instance. At peak, the current database supports 40,000 reads per second and 7,000 writes per second at a 1 KB row size. You need to determine the most cost-effective size for the Spanner instance to handle the equivalent current workload. What should you do?
A. Select a 4-node Spanner instance.
B. Select a 6-node Spanner instance.
C. Select a 1-node Spanner instance.
D. Recommend a multi-region Spanner instance.
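As a rough sizing check, older Spanner guidance cited on the order of 10,000 reads per second and 2,000 writes per second per node for a regional instance at ~1 KB rows. Treating those as planning figures only (actual throughput varies by schema and workload), the arithmetic looks like this:

```python
import math

# Approximate per-node throughput from older Spanner sizing guidance
# for a regional instance at ~1 KB rows. Workload-dependent; treat as
# rough planning numbers, not guarantees.
READS_PER_NODE = 10_000
WRITES_PER_NODE = 2_000

nodes_for_reads = math.ceil(40_000 / READS_PER_NODE)   # 4
nodes_for_writes = math.ceil(7_000 / WRITES_PER_NODE)  # 4 (3.5 rounded up)
print(max(nodes_for_reads, nodes_for_writes))          # 4 nodes
```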
Question 133
You are building a data warehouse on BigQuery. Sources of data include several MySQL databases located on-premises. You need to transfer data from these databases into BigQuery for analytics. You want to use a managed solution that has low latency and is easy to set up. What should you do?
A. Use Datastream to connect to your on-premises database and create a stream. Have Datastream write to Cloud Storage. Then use Dataflow to process the data into BigQuery.
B. Use Cloud Data Fusion and scheduled workflows to extract data from MySQL. Transform this data into the appropriate schema, and load this data into your BigQuery database.
C. Use Database Migration Service to replicate data to a Cloud SQL for MySQL instance. Create federated tables in BigQuery on top of the replicated instance to transform and load the data into your BigQuery database.
D. Create extracts from your on-premises databases periodically, and push these extracts to Cloud Storage. Upload the changes into BigQuery, and merge them with existing tables.
Question 134
You want to migrate an existing on-premises application to Google Cloud. The application ingests semi-structured data from 100,000 sensors in manufacturing plants, each of which sends 10 readings per second. You need to make this data available for real-time monitoring and analysis. What should you do?
A. Deploy the database using Cloud SQL.
B. Use BigQuery, and load data in batches.
C. Deploy the database using Bigtable.
D. Deploy the database using Cloud Spanner.
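The stated load works out to 100,000 sensors × 10 readings per second = 1,000,000 writes per second. For context, a minimal, hypothetical sketch of writing one sensor reading to Bigtable; the instance, table, column family, and row-key scheme are all placeholders:

```python
# Hypothetical sketch: write a single sensor reading to Bigtable.
# Assumes the table and a "measurements" column family already exist.
from google.cloud import bigtable

client = bigtable.Client(project="my-project")
table = client.instance("sensor-instance").table("readings")

# Row key combines sensor ID and timestamp so reads for one sensor
# scan a contiguous key range.
row = table.direct_row(b"sensor#00042#2024-01-01T00:00:00Z")
row.set_cell("measurements", "temperature_c", b"21.5")
row.commit()
```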
Question 135
Your company is launching a gaming application that uses a Firestore database. You need to identify an easy-to-manage and cost-effective solution to automate the scheduling of Firestore data exports. What should you do?
A. Create a new Compute Engine instance, and set up a cron job to run the gcloud firestore export command.
B. Use Dataflow to create a custom pipeline to extract data from Firestore, transform it into the desired format, and load it into a Cloud Storage bucket at regular intervals.
C. Use Cloud Scheduler to trigger a Cloud Function that executes the Firestore export process.
D. Use the Firebase Admin SDK to programmatically schedule and manage exports.
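For context, a hedged sketch of the kind of function such a schedule might trigger, assuming the Firestore Admin client; the project, database, and bucket names are placeholders, and the function would sit behind a Cloud Scheduler HTTP or Pub/Sub trigger:

```python
# Hypothetical Cloud Function body: start a Firestore export to a
# Cloud Storage bucket. All names below are placeholders.
from google.cloud import firestore_admin_v1

def export_firestore(request):
    client = firestore_admin_v1.FirestoreAdminClient()
    db = client.database_path("my-project", "(default)")
    client.export_documents(
        request=firestore_admin_v1.ExportDocumentsRequest(
            name=db,
            output_uri_prefix="gs://my-export-bucket/firestore-exports",
        )
    )
    return "Export started"
```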
Question 136
Your company is launching a new globally distributed application with strict requirements for low latency, strong consistency, zero downtime, and high availability (HA). You need to configure a scalable database solution to support anticipated rapid growth and optimal application performance. What should you do?
A. Create a Spanner instance across regions for optimal performance.
B. Implement Bigtable with replication across multiple regions, and configure it to prioritize data accuracy.
C. Create a Cloud SQL instance in HA mode with a cross-region read replica.
D. Create an AlloyDB instance in HA mode with a cross-region read replica.
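For reference, a minimal sketch of creating a multi-region Spanner instance with the Python client; the project, instance ID, node count, and the choice of the nam-eur-asia1 configuration are illustrative:

```python
# Hypothetical sketch: create a multi-region Spanner instance.
# "nam-eur-asia1" is one of Spanner's multi-region configurations.
from google.cloud import spanner

client = spanner.Client(project="my-project")
config = "projects/my-project/instanceConfigs/nam-eur-asia1"
instance = client.instance(
    "game-instance", configuration_name=config, node_count=3
)

operation = instance.create()   # long-running operation
operation.result(timeout=600)   # block until the instance is ready
```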
Question 137
Your rapidly growing ecommerce company is migrating their analytics workloads to AlloyDB for PostgreSQL. You anticipate a significant increase in reporting queries as the business scales. You need a read pool strategy to scale your analytics operations in anticipation of future growth while minimizing costs. What should you do?
A. Direct all complex, long-running analytics queries to the primary instance, and only use read pools for short, frequent reports.
B. Change the instance sizes of the read nodes in the read pool.
C. Begin with minimal read pools and iteratively expand or shrink them based on real-time load monitoring to optimize resource allocation.
D. Assign all reporting queries to a single, large read pool to maximize the combined compute resources available for analytics.
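For context, a hedged sketch of adding a small read pool to an existing cluster that can be resized later as load dictates, assuming the google-cloud-alloydb admin client; every resource name and size below is a placeholder:

```python
# Hypothetical sketch: add a small AlloyDB read pool to an existing
# cluster. All names and sizes are placeholders.
from google.cloud import alloydb_v1

client = alloydb_v1.AlloyDBAdminClient()
parent = client.cluster_path("my-project", "us-central1", "analytics-cluster")

operation = client.create_instance(
    request=alloydb_v1.CreateInstanceRequest(
        parent=parent,
        instance_id="reporting-read-pool",
        instance=alloydb_v1.Instance(
            instance_type=alloydb_v1.Instance.InstanceType.READ_POOL,
            read_pool_config=alloydb_v1.Instance.ReadPoolConfig(node_count=2),
            machine_config=alloydb_v1.Instance.MachineConfig(cpu_count=4),
        ),
    )
)
operation.result(timeout=1200)
```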
Question 138
You are deploying a Cloud SQL for MySQL database to serve a non-critical application. The database size is 10 GB and will be updated every night with data stored in a Cloud Storage bucket. The database serves read-only traffic from the application during the day. The data locality requirement of this application mandates that data must reside in a single region. You want to minimize the cost of running this database while maintaining an RTO of 1 day. What should you do?
A. Create a Cloud SQL for MySQL instance with high availability (HA) enabled. Configure automated backups of the Cloud SQL instance, and use the default backup location.
B. Create a Cloud SQL for MySQL instance with high availability (HA) disabled. Create a read replica in the same zone.
C. Create a Cloud SQL for MySQL instance with high availability (HA) disabled. Create a read replica in a second region.
D. Create a Cloud SQL for MySQL instance with high availability (HA) disabled. Configure automated backups of the Cloud SQL instance, and use a custom backup location to store backups in a Cloud Storage bucket in the same region.
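For reference, a hedged sketch of setting a custom backup location through the SQL Admin API's discovery client; the project, instance, and region are placeholders:

```python
# Hypothetical sketch: pin automated backups to a specific region via
# the SQL Admin API. Project, instance, and region are placeholders.
from googleapiclient import discovery

service = discovery.build("sqladmin", "v1")
body = {
    "settings": {
        "backupConfiguration": {
            "enabled": True,
            "location": "us-central1",  # custom backup location
        }
    }
}
service.instances().patch(
    project="my-project", instance="my-instance", body=body
).execute()
```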
Question 139
You are planning to migrate a 10 TB relational database from an on-premises environment to Cloud SQL for PostgreSQL. The database contains sensitive customer information. You want to follow Google-recommended practices to keep data secure during the migration. What should you do? (Choose two.)
A. Configure Cloud SQL for automatic patching, and enable binary logging.
B. Establish a Private Service Connect connection between your on-premises environment and the Cloud SQL instance.
C. Use an external IP address for the Cloud SQL instance, and configure firewall rules.
D. Set up Identity and Access Management (IAM) roles to restrict access, and use a Cloud SQL instance with an internal IP address.
E. Leverage Storage Transfer Service with client-side encryption.
Question 140
You are running a Cloud SQL for PostgreSQL 13 Enterprise Edition instance. During an audit, you discovered that the write-ahead logs used for point-in-time recovery (PITR) are stored on disk. You need to store PITR logs in a Cloud Storage bucket going forward. How should you do this without compromising recoverability or losing the current PITR logs?
A. Clone the instance. Create a new instance with PITR retention set to 30 days.
B. Change the transaction logs (WAL) retention period.
C. Upgrade to Enterprise Plus Edition.
D. Disable PITR, then enable PITR.