Question 211
You need to develop a custom TensorFlow model that will be used for online predictions. The training data is stored in BigQuery. You need to apply instance-level data transformations to the data for model training and serving. You want to use the same preprocessing routine during model training and serving. How should you configure the preprocessing routine?
A. Create a BigQuery script to preprocess the data, and write the result to another BigQuery table.
B. Create a pipeline in Vertex AI Pipelines to read the data from BigQuery and preprocess it using a custom preprocessing component.
C. Create a preprocessing function that reads and transforms the data from BigQuery. Create a Vertex AI custom prediction routine that calls the preprocessing function at serving time.
D. Create an Apache Beam pipeline to read the data from BigQuery and preprocess it by using TensorFlow Transform and Dataflow.
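For reference, a minimal sketch of the TensorFlow Transform routine that option D describes; the feature names are hypothetical. Because only instance-level ops are used (no full-pass analyzers), the transform graph can be exported with the model so training and serving share one preprocessing routine.

    import tensorflow as tf

    def preprocessing_fn(inputs):
        """tf.Transform preprocessing_fn using only instance-level ops.

        The same transform graph is applied by the Beam/Dataflow job at
        training time and attached to the serving signature.
        """
        return {
            'log_amount': tf.math.log1p(inputs['amount']),       # hypothetical numeric feature
            'review_lower': tf.strings.lower(inputs['review']),  # hypothetical text feature
        }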
Question 212
You are pre-training a large language model on Google Cloud. This model includes custom TensorFlow operations in the training loop. Model training will use a large batch size, and you expect training to take several weeks. You need to configure a training architecture that minimizes both training time and compute costs. What should you do?
A. Implement 8 workers of a2-megagpu-16g machines by using tf.distribute.MultiWorkerMirroredStrategy.
B. Implement a TPU Pod slice with --accelerator-type=v4-128 by using tf.distribute.TPUStrategy.
C. Implement 16 workers of c2d-highcpu-32 machines by using tf.distribute.MirroredStrategy.
D. Implement 16 workers of a2-highgpu-8g machines by using tf.distribute.MultiWorkerMirroredStrategy.
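As a point of reference, a minimal sketch of the multi-worker GPU pattern named in options A and D; the model is a placeholder. On a managed training service, the TF_CONFIG cluster specification is set for each worker automatically.

    import tensorflow as tf

    # Each worker runs this same script; TF_CONFIG (set by the training
    # service) tells the strategy about the other workers.
    strategy = tf.distribute.MultiWorkerMirroredStrategy()

    with strategy.scope():
        model = tf.keras.Sequential([tf.keras.layers.Dense(1)])  # placeholder model
        model.compile(optimizer='adam', loss='mse')

    # model.fit(...) then shards the large global batch across all GPUs.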
Question 213
You are building a TensorFlow text-to-image generative model by using a dataset that contains billions of images with their respective captions. You want to create a low-maintenance, automated workflow that reads the data from a Cloud Storage bucket, collects statistics, splits the dataset into training/validation/test datasets, performs data transformations, trains the model using the training/validation datasets, and validates the model by using the test dataset. What should you do?
A. Use the Apache Airflow SDK to create multiple operators that use Dataflow and Vertex AI services. Deploy the workflow on Cloud Composer.
B. Use the MLflow SDK and deploy it on a Google Kubernetes Engine cluster. Create multiple components that use Dataflow and Vertex AI services.
C. Use the Kubeflow Pipelines (KFP) SDK to create multiple components that use Dataflow and Vertex AI services. Deploy the workflow on Vertex AI Pipelines.
D. Use the TensorFlow Extended (TFX) SDK to create multiple components that use Dataflow and Vertex AI services. Deploy the workflow on Vertex AI Pipelines.
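A skeletal TFX pipeline of the kind option D describes, compiled for Vertex AI Pipelines; paths, module files, and step counts are hypothetical, and the Dataflow beam_pipeline_args are omitted for brevity.

    from tfx import v1 as tfx

    def create_pipeline(pipeline_root: str, data_root: str) -> tfx.dsl.Pipeline:
        # Reads TFRecords from Cloud Storage and (by default) splits train/eval.
        example_gen = tfx.components.ImportExampleGen(input_base=data_root)
        statistics_gen = tfx.components.StatisticsGen(
            examples=example_gen.outputs['examples'])
        schema_gen = tfx.components.SchemaGen(
            statistics=statistics_gen.outputs['statistics'])
        transform = tfx.components.Transform(
            examples=example_gen.outputs['examples'],
            schema=schema_gen.outputs['schema'],
            module_file='preprocessing.py')  # hypothetical
        trainer = tfx.components.Trainer(
            module_file='trainer.py',  # hypothetical
            examples=transform.outputs['transformed_examples'],
            transform_graph=transform.outputs['transform_graph'],
            train_args=tfx.proto.TrainArgs(num_steps=100000),
            eval_args=tfx.proto.EvalArgs(num_steps=1000))
        evaluator = tfx.components.Evaluator(
            examples=example_gen.outputs['examples'],
            model=trainer.outputs['model'])
        return tfx.dsl.Pipeline(
            pipeline_name='text-to-image',
            pipeline_root=pipeline_root,
            components=[example_gen, statistics_gen, schema_gen,
                        transform, trainer, evaluator])

    # Compile for Vertex AI Pipelines, then submit the emitted pipeline.json.
    tfx.orchestration.experimental.KubeflowV2DagRunner(
        config=tfx.orchestration.experimental.KubeflowV2DagRunnerConfig(),
        output_filename='pipeline.json',
    ).run(create_pipeline('gs://my-bucket/root', 'gs://my-bucket/data'))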
Question 214
You are developing an ML pipeline using Vertex AI Pipelines. You want your pipeline to upload a new version of the XGBoost model to Vertex AI Model Registry and deploy it to Vertex AI Endpoints for online inference. You want to use the simplest approach. What should you do?
A. Use the Vertex AI REST API within a custom component based on a vertex-ai/prediction/xgboost-cpu image.
B. Use the Vertex AI ModelEvaluationOp component to evaluate the model.
C. Use the Vertex AI SDK for Python within a custom component based on a python:3.10 image.
D. Chain the Vertex AI ModelUploadOp and ModelDeployOp components together.
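A sketch of the chaining described in option D, assuming the google-cloud-pipeline-components library; the artifact path is hypothetical and exact parameter names vary somewhat across library versions.

    from kfp import dsl
    from google_cloud_pipeline_components.types import artifact_types
    from google_cloud_pipeline_components.v1.endpoint import EndpointCreateOp, ModelDeployOp
    from google_cloud_pipeline_components.v1.model import ModelUploadOp

    @dsl.pipeline(name='xgb-upload-and-deploy')
    def pipeline(project: str):
        # Point the upload step at the trained model artifacts and a
        # prebuilt XGBoost serving container.
        importer = dsl.importer(
            artifact_uri='gs://my-bucket/model/',  # hypothetical
            artifact_class=artifact_types.UnmanagedContainerModel,
            metadata={'containerSpec': {
                'imageUri': 'us-docker.pkg.dev/vertex-ai/prediction/xgboost-cpu.1-7:latest'}},
        )
        upload = ModelUploadOp(
            project=project,
            display_name='xgb-model',
            unmanaged_container_model=importer.output,
        )
        endpoint = EndpointCreateOp(project=project, display_name='xgb-endpoint')
        ModelDeployOp(
            model=upload.outputs['model'],
            endpoint=endpoint.outputs['endpoint'],
            dedicated_resources_machine_type='n1-standard-4',
            dedicated_resources_min_replica_count=1,
            dedicated_resources_max_replica_count=1,
        )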
Question 215
You work for an online retailer. Your company has a few thousand short-lifecycle products. Your company has five years of sales data stored in BigQuery. You have been asked to build a model that will make monthly sales predictions for each product. You want to use a solution that can be implemented quickly with minimal effort. What should you do?
A. Use Prophet on Vertex AI Training to build a custom model.
B. Use Vertex AI Forecast to build an NN-based model.
C. Use BigQuery ML to build a statistical ARIMA_PLUS model.
D. Use TensorFlow on Vertex AI Training to build a custom model.
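A minimal sketch of the BigQuery ML approach in option C; the dataset, table, and column names are hypothetical. ARIMA_PLUS with a time_series_id_col fits one model per product across thousands of products in a single statement.

    from google.cloud import bigquery

    client = bigquery.Client()
    client.query("""
    CREATE OR REPLACE MODEL `my_dataset.monthly_sales_forecast`
    OPTIONS (
      model_type = 'ARIMA_PLUS',
      time_series_timestamp_col = 'month',
      time_series_data_col = 'units_sold',
      time_series_id_col = 'product_id'  -- one series per product
    ) AS
    SELECT month, units_sold, product_id
    FROM `my_dataset.sales_history`
    """).result()

    # Forecasts: SELECT * FROM ML.FORECAST(MODEL `my_dataset.monthly_sales_forecast`,
    #                                      STRUCT(12 AS horizon))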
Question 216
You are creating a model training pipeline to predict sentiment scores from text-based product reviews. You want to have control over how the model parameters are tuned, and you will deploy the model to an endpoint after it has been trained. You will use Vertex AI Pipelines to run the pipeline. You need to decide which Google Cloud pipeline components to use. What components should you choose?
A. TabularDatasetCreateOp, CustomTrainingJobOp, and EndpointCreateOp
B. TextDatasetCreateOp, AutoMLTextTrainingOp, and EndpointCreateOp
C. TabularDatasetCreateOp, AutoMLTextTrainingOp, and ModelDeployOp
D. TextDatasetCreateOp, CustomTrainingJobOp, and ModelDeployOp
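For context on the CustomTrainingJobOp named in options A and D, a sketch showing how a custom container keeps hyperparameter tuning under the caller's control; the image URI and arguments are hypothetical.

    from kfp import dsl
    from google_cloud_pipeline_components.v1.custom_job import CustomTrainingJobOp

    @dsl.pipeline(name='sentiment-training')
    def pipeline(project: str, location: str):
        CustomTrainingJobOp(
            project=project,
            location=location,
            display_name='sentiment-train',
            worker_pool_specs=[{
                'machine_spec': {'machine_type': 'n1-standard-8'},
                'replica_count': 1,
                'container_spec': {
                    'image_uri': 'gcr.io/my-project/sentiment-trainer:latest',  # hypothetical
                    # Full control over how the model parameters are tuned:
                    'args': ['--learning_rate=0.001', '--epochs=10'],
                },
            }],
        )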
Question 217
Your team frequently creates new ML models and runs experiments. Your team pushes code to a single repository hosted on Cloud Source Repositories. You want to create a continuous integration pipeline that automatically retrains the models whenever there is any modification of the code. What should be your first step to set up the CI pipeline?
A. Configure a Cloud Build trigger with the event set as "Pull Request".
B. Configure a Cloud Build trigger with the event set as "Push to a branch".
C. Configure a Cloud Function that builds the repository each time there is a code change.
D. Configure a Cloud Function that builds the repository each time a new branch is created.
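A sketch of creating the push trigger from option B programmatically, assuming the google-cloud-build client library; the repository and project names are hypothetical, and the same trigger can also be configured in the console.

    from google.cloud.devtools import cloudbuild_v1

    client = cloudbuild_v1.CloudBuildClient()
    trigger = cloudbuild_v1.BuildTrigger(
        name='retrain-on-push',
        trigger_template=cloudbuild_v1.RepoSource(
            repo_name='ml-experiments',  # hypothetical Cloud Source Repositories repo
            branch_name='.*',            # fire on a push to any branch
        ),
        filename='cloudbuild.yaml',      # build steps that retrain the models
    )
    client.create_build_trigger(project_id='my-project', trigger=trigger)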
Question 218
You have built a custom model that performs several memory-intensive preprocessing tasks before it makes a prediction. You deployed the model to a Vertex AI endpoint, and validated that results were received in a reasonable amount of time. After routing user traffic to the endpoint, you discover that the endpoint does not autoscale as expected when receiving multiple requests. What should you do?
A. Use a machine type with more memory.
B. Decrease the number of workers per machine.
C. Increase the CPU utilization target in the autoscaling configurations.
D. Decrease the CPU utilization target in the autoscaling configurations.
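To illustrate the autoscaling knob that options C and D refer to, a sketch using the Vertex AI SDK for Python; the resource names are hypothetical. A lower CPU target makes the endpoint add replicas sooner.

    from google.cloud import aiplatform

    aiplatform.init(project='my-project', location='us-central1')  # hypothetical

    model = aiplatform.Model('projects/123/locations/us-central1/models/456')  # hypothetical
    model.deploy(
        machine_type='n1-highmem-4',
        min_replica_count=1,
        max_replica_count=10,
        # Scale out when average CPU crosses 40% instead of the 60% default.
        autoscaling_target_cpu_utilization=40,
    )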
Question 219
Your company manages an ecommerce website. You developed an ML model that recommends additional products to users in near real time based on items currently in the user’s cart. The workflow will include the following processes:
1. The website will send a Pub/Sub message with the relevant data and then receive a message with the prediction from Pub/Sub.
2. Predictions will be stored in BigQuery.
3. The model will be stored in a Cloud Storage bucket and will be updated frequently.
You want to minimize prediction latency and the effort required to update the model. How should you reconfigure the architecture?
A. Write a Cloud Function that loads the model into memory for prediction. Configure the function to be triggered when messages are sent to Pub/Sub.
B. Create a pipeline in Vertex AI Pipelines that performs preprocessing, prediction, and postprocessing. Configure the pipeline to be triggered by a Cloud Function when messages are sent to Pub/Sub.
C. Expose the model as a Vertex AI endpoint. Write a custom DoFn in a Dataflow job that calls the endpoint for prediction.
D. Use the RunInference API with WatchFilePattern in a Dataflow job that wraps around the model and serves predictions.
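A sketch of the Dataflow pattern in option D, assuming Apache Beam's RunInference with automatic model refresh; the bucket, subscription, and parsing details are hypothetical.

    import json
    import apache_beam as beam
    import tensorflow as tf
    from apache_beam.ml.inference.base import RunInference
    from apache_beam.ml.inference.tensorflow_inference import TFModelHandlerTensor
    from apache_beam.ml.inference.utils import WatchFilePattern
    from apache_beam.options.pipeline_options import PipelineOptions

    def parse_cart(msg: bytes) -> tf.Tensor:
        # Hypothetical: convert the cart message into the model's input tensor.
        cart = json.loads(msg.decode('utf-8'))
        return tf.constant(cart['item_ids'], dtype=tf.int64)

    with beam.Pipeline(options=PipelineOptions(streaming=True)) as p:
        # Side input that emits the newest model path whenever a matching file
        # lands in the bucket, so model updates need no pipeline redeploy.
        model_updates = p | 'WatchModel' >> WatchFilePattern(
            file_pattern='gs://my-bucket/models/*.h5')  # hypothetical

        predictions = (
            p
            | 'ReadCarts' >> beam.io.ReadFromPubSub(
                subscription='projects/my-project/subscriptions/carts')  # hypothetical
            | 'ToTensors' >> beam.Map(parse_cart)
            | 'Predict' >> RunInference(
                model_handler=TFModelHandlerTensor(
                    model_uri='gs://my-bucket/models/initial.h5'),
                model_metadata_pcoll=model_updates))
        # predictions then fan out to WriteToPubSub (reply) and WriteToBigQuery.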
Question 220
You are collaborating on a model prototype with your team. You need to create a Vertex AI Workbench environment for the members of your team and also limit access to other employees in your project. What should you do?
A. 1. Create a new service account and grant it the Notebook Viewer role.
2. Grant the Service Account User role to each team member on the service account.
3. Grant the Vertex AI User role to each team member.
4. Provision a Vertex AI Workbench user-managed notebook instance that uses the new service account.
B. 1. Grant the Vertex AI User role to the default Compute Engine service account.
2. Grant the Service Account User role to each team member on the default Compute Engine service account.
3. Provision a Vertex AI Workbench user-managed notebook instance that uses the default Compute Engine service account.
C. 1. Create a new service account and grant it the Vertex AI User role.
2. Grant the Service Account User role to each team member on the service account.
3. Grant the Notebook Viewer role to each team member.
4. Provision a Vertex AI Workbench user-managed notebook instance that uses the new service account.
D. 1. Grant the Vertex AI User role to the primary team member.
2. Grant the Notebook Viewer role to the other team members.
3. Provision a Vertex AI Workbench user-managed notebook instance that uses the primary user’s account.
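To ground the provisioning step in options A and C, a sketch that creates a user-managed notebook instance running as a dedicated service account, assuming the google-cloud-notebooks client library; all names are hypothetical, and the IAM grants themselves are typically done in the console or with gcloud.

    from google.cloud import notebooks_v1

    client = notebooks_v1.NotebookServiceClient()
    operation = client.create_instance(
        parent='projects/my-project/locations/us-central1-a',  # hypothetical
        instance_id='team-prototype',
        instance=notebooks_v1.Instance(
            machine_type='n1-standard-4',
            # Run as the new service account, so access is limited to the
            # principals granted Service Account User on it.
            service_account='notebook-sa@my-project.iam.gserviceaccount.com',
            vm_image=notebooks_v1.VmImage(
                project='deeplearning-platform-release',
                image_family='tf-latest-cpu',
            ),
        ),
    )
    operation.result()  # wait for provisioning to finish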