Question 311
You lead a data science team that is working on a computationally intensive project involving running several experiments. Your team is geographically distributed and requires a platform that provides the most effective real-time collaboration and rapid experimentation. You plan to add GPUs to speed up your experimentation cycle, and you want to avoid having to manually set up the infrastructure. You want to use the Google-recommended approach. What should you do?
A. Configure a managed Dataproc cluster for large-scale data processing. Configure individual Jupyter notebooks on VMs that each team member uses for experimentation and model development.
B. Use Colab Enterprise with Cloud Storage for data management. Use a Git repository for version control.
C. Use Vertex AI Workbench and Cloud Storage for data management. Use a Git repository for version control.
D. Configure a distributed JupyterLab instance that each team member can access on a Compute Engine VM. Use a shared code repository for version control.
Question 312
You need to train a ControlNet model with Stable Diffusion XL for an image editing use case. You want to train this model as quickly as possible. Which hardware configuration should you choose to train your model?
A. Configure one a2-highgpu-1g instance with an NVIDIA A100 GPU with 80 GB of RAM. Use float32 precision during model training.
B. Configure one a2-highgpu-1g instance with an NVIDIA A100 GPU with 80 GB of RAM. Use bfloat16 quantization during model training.
C. Configure four n1-standard-16 instances, each with one NVIDIA Tesla T4 GPU with 16 GB of RAM. Use float32 precision during model training.
D. Configure four n1-standard-16 instances, each with one NVIDIA Tesla T4 GPU with 16 GB of RAM. Use float16 quantization during model training.
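A note on the precision options above (illustrative, not part of the exam item): in Keras, lower-precision training is typically enabled through a mixed-precision policy, which runs compute in bfloat16 while keeping variables in float32 for stability. A minimal sketch, assuming TensorFlow and a toy model standing in for the ControlNet/SDXL workload:

```python
import tensorflow as tf
from tensorflow.keras import mixed_precision

# Enable bfloat16 mixed precision: compute ops run in bfloat16 while model
# variables stay in float32 for numerical stability.
mixed_precision.set_global_policy("mixed_bfloat16")

# Toy model (hypothetical); the real workload would be a ControlNet/SDXL setup.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(128, activation="relu"),
    # Keep the final layer in float32 so the loss is computed at full precision.
    tf.keras.layers.Dense(10, dtype="float32"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
```

The practical effect is roughly halved activation memory and higher throughput on accelerators with native bfloat16 support, such as the A100.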
Question 313
You are the lead ML engineer on a mission-critical project that involves analyzing massive datasets using Apache Spark. You need to establish a robust environment that allows your team to rapidly prototype Spark models using Jupyter notebooks. What is the fastest way to achieve this?
A. Set up a Vertex AI Workbench instance with a Spark kernel.
B. Use Colab Enterprise with a Spark kernel.
C. Set up a Dataproc cluster with Spark and use Jupyter notebooks.
D. Configure a Compute Engine instance with Spark and use Jupyter notebooks.
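For context on notebook-based Spark prototyping (illustrative only): on a Dataproc cluster with the Jupyter optional component, or a notebook with a Spark kernel, experimentation usually starts from a SparkSession. A minimal sketch, with a hypothetical Cloud Storage path:

```python
from pyspark.sql import SparkSession

# In a Dataproc Jupyter notebook a `spark` session usually exists already;
# getOrCreate() reuses it rather than starting a second one.
spark = SparkSession.builder.appName("spark-prototyping").getOrCreate()

df = spark.read.parquet("gs://your-bucket/events/")  # hypothetical path
df.groupBy("label").count().show()
```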
Question 314
You are training a large-scale deep learning model on a Cloud TPU. While monitoring the training progress in TensorBoard, you observe that the TPU utilization is consistently low and there are delays between the completion of one training step and the start of the next step. You want to improve TPU utilization and overall training performance. How should you address this issue?
A. Apply tf.data.Dataset.map with vectorized operations and parallelization.
B. Use tf.data.Dataset.interleave with multiple data sources.
C. Use tf.data.Dataset.cache on the dataset after the first epoch.
D. Implement tf.data.Dataset.prefetch in the data pipeline.
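To make the options concrete (not part of the exam item): the tf.data calls named above compose into a single input pipeline. A minimal sketch, assuming TFRecord input with a hypothetical schema and file pattern:

```python
import tensorflow as tf

AUTOTUNE = tf.data.AUTOTUNE

def parse_fn(record):
    # Hypothetical schema: a fixed-length feature vector and an integer label.
    parsed = tf.io.parse_single_example(record, {
        "x": tf.io.FixedLenFeature([64], tf.float32),
        "y": tf.io.FixedLenFeature([], tf.int64),
    })
    return parsed["x"], parsed["y"]

files = tf.data.Dataset.list_files("gs://your-bucket/train-*.tfrecord")  # hypothetical
dataset = (
    files
    .interleave(tf.data.TFRecordDataset, num_parallel_calls=AUTOTUNE)  # option B
    .map(parse_fn, num_parallel_calls=AUTOTUNE)                        # option A
    .cache()                                                           # option C
    .batch(1024)
    .prefetch(AUTOTUNE)  # option D: prepare the next batch while the TPU computes
)
```

When the symptom is specifically idle gaps between steps, prefetch is the call that overlaps host-side input preparation with accelerator compute.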
Question 315
You are building an ML pipeline to process and analyze both streaming and batch datasets. You need the pipeline to handle data validation, preprocessing, model training, and model deployment in a consistent and automated way. You want to design an efficient and scalable solution that captures model training metadata and is easily reproducible. You want to be able to reuse custom components for different parts of your pipeline. What should you do?
A. Use Cloud Composer for distributed processing of batch and streaming data in the pipeline.
B. Use Dataflow for distributed processing of batch and streaming data in the pipeline.
C. Use Cloud Build to build and push Docker images for each pipeline component.
D. Implement an orchestration framework such as Kubeflow Pipelines or Vertex AI Pipelines.
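For context on option D (illustrative only): Kubeflow Pipelines v2 expresses reusable custom components as decorated Python functions, and the compiled pipeline can run on Vertex AI Pipelines, which records execution metadata. A minimal sketch with placeholder component logic:

```python
from kfp import compiler, dsl

# Each step is a reusable, containerized component; the bodies here are
# placeholders standing in for real validation and training logic.
@dsl.component
def validate_data(path: str) -> str:
    print(f"validating {path}")
    return path

@dsl.component
def train_model(data: str) -> str:
    return f"model trained on {data}"

@dsl.pipeline(name="example-training-pipeline")
def pipeline(input_path: str):
    validated = validate_data(path=input_path)
    train_model(data=validated.output)

# The compiled spec can be submitted to Vertex AI Pipelines, which records
# run and artifact metadata automatically.
compiler.Compiler().compile(pipeline, "pipeline.json")
```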
Question 316
You are developing an ML model on Vertex AI that needs to meet specific interpretability requirements for regulatory compliance. You want to use a combination of model architectures and modeling techniques to maximize accuracy and interpretability. How should you create the model?
A. Use a convolutional neural network (CNN)-based deep learning model architecture, and use local interpretable model-agnostic explanations (LIME) for interpretability.
B. Use a recurrent neural network (RNN)-based deep learning model architecture, and use integrated gradients for interpretability.
C. Use a boosted decision tree-based model architecture, and use SHAP values for interpretability.
D. Use a long short-term memory (LSTM)-based model architecture, and use local interpretable model-agnostic explanations (LIME) for interpretability.
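For context on option C (illustrative only): SHAP's TreeExplainer computes exact attributions efficiently for tree ensembles such as boosted trees. A minimal sketch using a synthetic dataset and XGBoost:

```python
import shap
import xgboost
from sklearn.datasets import make_classification

# Synthetic stand-in data; a real model would be trained on domain features.
X, y = make_classification(n_samples=500, n_features=10, random_state=0)
model = xgboost.XGBClassifier(n_estimators=50).fit(X, y)

# TreeExplainer computes exact SHAP values efficiently for tree ensembles.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)
shap.summary_plot(shap_values, X)  # global, per-feature contribution view
```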
Question 317
You have developed a fraud detection model for a large financial institution using Vertex AI. The model achieves high accuracy, but the stakeholders are concerned about the model's potential for bias based on customer demographics. You have been asked to provide insights into the model's decision-making process and identify any fairness issues. What should you do?
A. Create feature groups using Vertex AI Feature Store to segregate customer demographic features and non-demographic features. Retrain the model using only non-demographic features.
B. Use feature attribution in Vertex AI to analyze model predictions and the impact of each feature on the model's predictions.
C. Enable Vertex AI Model Monitoring to detect training-serving skew. Configure an alert to send an email when the skew or drift for a model feature exceeds a predefined threshold. Retrain the model by appending new data to the existing training data.
D. Compile a dataset of unfair predictions. Use Vertex AI Vector Search to identify similar data points in the model's predictions. Report these data points to the stakeholders.
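For context on option B (illustrative only): when a model is deployed to a Vertex AI endpoint with an explanation configuration, per-feature attributions can be requested alongside predictions. A minimal sketch with hypothetical project, endpoint ID, and feature values:

```python
from google.cloud import aiplatform

aiplatform.init(project="your-project", location="us-central1")  # hypothetical
endpoint = aiplatform.Endpoint("1234567890")  # hypothetical endpoint ID

# explain() returns predictions plus per-feature attributions for models
# deployed with an explanation spec (for example, sampled Shapley).
response = endpoint.explain(instances=[{"age": 42, "balance": 1000.0}])
for explanation in response.explanations:
    for attribution in explanation.attributions:
        print(attribution.feature_attributions)
```

Comparing attribution magnitudes for demographic versus non-demographic features is one way to surface the fairness concerns the stakeholders raised.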
Question 318
You developed an ML model using Vertex AI and deployed it to a Vertex AI endpoint. You anticipate that the model will need to be retrained as new data becomes available. You have configured a Vertex AI Model Monitoring Job. You need to monitor the model for feature attribution drift and establish continuous evaluation metrics. What should you do?
A. Set up alerts using Cloud Logging, and use the Vertex AI console to review feature attributions.
B. Set up alerts using Cloud Logging, and use Looker Studio to create a dashboard that visualizes feature attribution drift. Review the dashboard periodically.
C. Enable request-response logging for the Vertex AI endpoint, and set up alerts using Pub/Sub. Create a Cloud Run function to run TensorFlow Data Validation on your dataset.
D. Enable request-response logging for the Vertex AI endpoint, and set up alerts using Cloud Logging. Review the feature attributions in the Google Cloud console when an alert is received.
Question 319
You work as an ML researcher at an investment bank, and you are experimenting with the Gemma large language model (LLM). You plan to deploy the model for an internal use case. You need to have full control of the model's underlying infrastructure and minimize the model's inference time. Which serving configuration should you use for this task?
A. Deploy the model on a Vertex AI endpoint manually by creating a custom inference container.
B. Deploy the model on a Google Kubernetes Engine (GKE) cluster by using the deployment options in Model Garden.
C. Deploy the model on a Vertex AI endpoint by using one-click deployment in Model Garden.
D. Deploy the model on a Google Kubernetes Engine (GKE) cluster manually by creating a custom YAML manifest.
Question 320
You are an ML researcher and are evaluating multiple deep learning-based model architectures and hyperparameter configurations. You need to implement a robust solution to track the progress of each model iteration, visualize key metrics, gain insights into model internals, and optimize training performance.
You want your solution to provide the most efficient and powerful approach to comparing the models, with the strongest visualization capabilities. How should you build this solution?
A. Use Vertex AI TensorBoard for in-depth visualization and analysis, and use BigQuery for experiment tracking and analysis.
B. Use Vertex AI TensorBoard for visualizing training progress and model behavior, and use Vertex AI Feature Store to store and manage experiment data for analysis and reproducibility.
C. Use Vertex AI Experiments for tracking iterations and comparison, and use Vertex AI TensorBoard for visualization and analysis of the training metrics and model architecture.
D. Use Vertex AI Experiments for tracking iterations and comparison, and use BigQuery and Looker Studio for visualization and analysis of the training metrics and model architecture.
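For context on option C (illustrative only): Vertex AI Experiments tracks runs, parameters, and metrics, and a backing Vertex AI TensorBoard instance provides the visualization layer. A minimal sketch with hypothetical project, experiment, and TensorBoard resource names:

```python
from google.cloud import aiplatform

# Hypothetical project, experiment, and TensorBoard resource names.
aiplatform.init(
    project="your-project",
    location="us-central1",
    experiment="arch-search",
    experiment_tensorboard=(
        "projects/123/locations/us-central1/tensorboards/456"
    ),
)

aiplatform.start_run("run-resnet-lr3e-4")
aiplatform.log_params({"architecture": "resnet50", "learning_rate": 3e-4})
for epoch in range(3):
    # Time-series metrics stream to the backing TensorBoard instance.
    aiplatform.log_time_series_metrics({"loss": 1.0 / (epoch + 1)})
aiplatform.log_metrics({"final_accuracy": 0.91})
aiplatform.end_run()
```

Runs logged this way can be compared side by side in the Experiments UI, while TensorBoard handles the deeper visualizations such as loss curves and model graphs.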