

Google Professional-Machine-Learning Exam

Viewing Questions 331-339 out of 339 Questions

Question 331
You are creating a retraining policy for a customer churn prediction model deployed in Vertex AI. New training data is added weekly. You want to implement a model retraining process that minimizes cost and effort. What should you do?
A. Retrain the model when a significant shift in the distribution of customer attributes is detected in the production data compared to the training data.
B. Retrain the model when the model's latency increases by 10% due to increased traffic.
C. Retrain the model when the model accuracy drops by 10% on the new training dataset.
D. Retrain the model every week when new training data is available.
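Option A hinges on detecting a distribution shift between training and production data. In Vertex AI this is what Model Monitoring's drift detection does; the underlying idea can be sketched with a population stability index (PSI) in plain Python. The 0.2 alert threshold and the data below are assumptions for illustration, not values from the question.

```python
import math

def psi(expected, actual, bins=10):
    """Population Stability Index between a baseline (training) sample and a
    production sample; higher values indicate a larger distribution shift."""
    lo = min(min(expected), min(actual))
    hi = max(max(expected), max(actual))
    width = (hi - lo) / bins or 1.0
    def hist(xs):
        counts = [0] * bins
        for x in xs:
            i = min(int((x - lo) / width), bins - 1)
            counts[i] += 1
        # small floor so empty bins don't blow up the log term
        return [max(c / len(xs), 1e-6) for c in counts]
    e, a = hist(expected), hist(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))

# A common (assumed) rule of thumb: PSI > 0.2 signals significant drift.
training = [0.1 * i for i in range(100)]                # baseline feature values
production_same = [0.1 * i for i in range(100)]         # no shift
production_shifted = [0.1 * i + 5 for i in range(100)]  # shifted by +5

print(psi(training, production_same) < 0.2)     # True: no retrain needed
print(psi(training, production_shifted) > 0.2)  # True: trigger retraining
```

Retraining only when such a check fires avoids the cost of retraining on a fixed weekly cadence when the data has not actually changed.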

Question 332
You are an AI engineer with an apparel retail company. The sales team has observed seasonal sales patterns over the past 5-6 years. The sales team analyzes and visualizes the weekly sales data stored in CSV files. You have been asked to estimate weekly sales for future seasons to optimize inventory and personnel workloads. You want to use the most efficient approach. What should you do?
A. Upload the files into Cloud Storage. Use Python to preprocess and load the tabular data into BigQuery. Use time series forecasting models to predict weekly sales.
B. Upload the files into Cloud Storage. Use Python to preprocess and load the tabular data into BigQuery. Train a logistic regression model by using BigQuery ML to predict each product's weekly sales as one of three categories: high, medium, or low.
C. Load the files into BigQuery. Preprocess data by using BigQuery SQL. Connect BigQuery to Looker. Create a Looker dashboard that shows weekly sales trends in real time and can slice and dice the data based on relevant filters.
D. Create a custom conversational application using Vertex AI Agent Builder. Include code that enables file upload functionality, and upload the files. Use few-shot prompting and retrieval-augmented generation (RAG) to predict future sales trends by using the Gemini large language model (LLM).
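Option A relies on a time-series forecasting model (in BigQuery ML, typically ARIMA_PLUS). The intuition behind forecasting seasonal weekly sales can be sketched with a seasonal-naive baseline in plain Python; the 52-week season and the sales numbers below are invented for illustration.

```python
def seasonal_naive_forecast(weekly_sales, season_length=52, horizon=4):
    """Forecast the next `horizon` weeks by repeating the value observed one
    full season (default 52 weeks) earlier -- a standard baseline that a real
    forecasting model such as BigQuery ML's ARIMA_PLUS is expected to beat."""
    forecasts = []
    for h in range(horizon):
        # index of the same week in the most recent complete season
        idx = len(weekly_sales) + h - season_length
        forecasts.append(weekly_sales[idx])
    return forecasts

# Two invented years of weekly sales with a repeating holiday bump.
one_season = [100 + (20 if 40 <= w <= 47 else 0) for w in range(52)]
history = one_season * 2

print(seasonal_naive_forecast(history, horizon=3))  # [100, 100, 100]
```

A regression-style forecast like this preserves the numeric sales estimate the inventory planners need, which is what the classification framing in option B throws away.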

Question 333
Your company's business stakeholders want to understand the factors driving customer churn to inform their business strategy. You need to build a customer churn prediction model that prioritizes simple interpretability of your model's results. You need to choose the ML framework and modeling technique that will explain which features led to the prediction. What should you do?
A. Build a TensorFlow deep neural network (DNN) model, and use SHAP values for feature importance analysis.
B. Build a PyTorch long short-term memory (LSTM) network, and use attention mechanisms for interpretability.
C. Build a logistic regression model in scikit-learn, and interpret the model's output coefficients to understand feature impact.
D. Build a linear regression model in scikit-learn, and interpret the model's standardized coefficients to understand feature impact.
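The interpretability promised by option C comes from reading logistic-regression coefficients directly: each coefficient is a log-odds effect, so `exp(coef)` is an odds ratio. A minimal sketch with invented coefficients (these are not fitted values):

```python
import math

# Invented coefficients from a fitted churn model on standardized features.
coefficients = {
    "support_tickets":  0.9,   # more tickets  -> higher churn odds
    "tenure_months":   -0.6,   # longer tenure -> lower churn odds
    "monthly_charges":  0.3,
}

# A logistic-regression coefficient b means: a one-unit increase in the
# feature multiplies the odds of churn by exp(b), holding others fixed.
odds_ratios = {f: math.exp(b) for f, b in coefficients.items()}

for feature, oratio in sorted(odds_ratios.items(), key=lambda kv: -kv[1]):
    direction = "raises" if oratio > 1 else "lowers"
    print(f"{feature}: odds ratio {oratio:.2f} ({direction} churn odds)")
```

This direct reading is what a DNN with post-hoc SHAP values (option A) cannot offer with the same simplicity, and linear regression (option D) is the wrong model family for a binary churn label.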

Question 334
You are responsible for managing and monitoring a Vertex AI model that is deployed in production. You want to automatically retrain the model when its performance deteriorates. What should you do?
A. Create a Vertex AI Model Monitoring job to track the model's performance with production data, and trigger retraining when specific metrics drop below predefined thresholds.
B. Collect feedback from end users, and retrain the model based on their assessment of its performance.
C. Configure a scheduled job to evaluate the model's performance on a static dataset, and retrain the model if the performance drops below predefined thresholds.
D. Use Vertex Explainable AI to analyze feature attributions and identify potential biases in the model. Retrain when significant shifts in feature importance or biases are detected.
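The pattern in option A is: monitor production metrics, then trigger retraining when a metric crosses a predefined threshold. A sketch of just the decision logic (in production, Vertex AI Model Monitoring would emit the alert; the thresholds and metric values here are assumptions):

```python
def should_retrain(current_metrics, thresholds):
    """Return the list of metrics that have fallen below their alert
    threshold. An empty list means the deployed model is still healthy."""
    breaches = []
    for name, minimum in thresholds.items():
        if current_metrics.get(name, 0.0) < minimum:
            breaches.append(name)
    return breaches

# Assumed alert thresholds for the deployed model.
thresholds = {"accuracy": 0.85, "recall": 0.70}

healthy  = {"accuracy": 0.91, "recall": 0.78}
degraded = {"accuracy": 0.91, "recall": 0.62}  # recall drifted down

print(should_retrain(healthy, thresholds))   # [] -> keep serving
print(should_retrain(degraded, thresholds))  # ['recall'] -> trigger retraining
```

Evaluating on live production data is the key difference from option C: a static dataset cannot reveal deterioration caused by changing real-world inputs.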

Question 335
You have recently developed a new ML model in a Jupyter notebook. You want to establish a reliable and repeatable model training process that tracks the versions and lineage of your model artifacts. You plan to retrain your model weekly. How should you operationalize your training process?
A. 1. Create an instance of the CustomTrainingJob class with the Vertex AI SDK to train your model.
2. Using the Notebooks API, create a scheduled execution to run the training code weekly.
B. 1. Create an instance of the CustomJob class with the Vertex AI SDK to train your model.
2. Use the Metadata API to register your model as a model artifact.
3. Using the Notebooks API, create a scheduled execution to run the training code weekly.
C. 1. Create a managed pipeline in Vertex AI Pipelines to train your model by using a Vertex AI CustomTrainingJobOp component.
2. Use the ModelUploadOp component to upload your model to Vertex AI Model Registry.
3. Use Cloud Scheduler and Cloud Run functions to run the Vertex AI pipeline weekly.
D. 1. Create a managed pipeline in Vertex AI Pipelines to train your model using a Vertex AI HyperParameterTuningJobRunOp component.
2. Use the ModelUploadOp component to upload your model to Vertex AI Model Registry.
3. Use Cloud Scheduler and Cloud Run functions to run the Vertex AI pipeline weekly.
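Options C and D both describe the same pipeline shape: a training step followed by a model-upload step that records each weekly version in a registry. A pure-Python sketch of that DAG and its lineage tracking; the real implementation would use the KFP DSL with components like `CustomTrainingJobOp` and `ModelUploadOp`, and all names below are illustrative stand-ins.

```python
# Illustrative stand-ins for pipeline components; in a real Vertex AI
# pipeline these would be CustomTrainingJobOp and ModelUploadOp.
def train_model(dataset_uri):
    return {"artifact_uri": f"{dataset_uri}/model"}

def upload_model(artifact_uri, registry):
    version = f"v{len(registry) + 1}"
    registry[version] = artifact_uri  # lineage: version -> training artifact
    return version

def weekly_pipeline_run(dataset_uri, registry):
    """One scheduled run: train, then register the artifact so every weekly
    version is tracked (the role Vertex AI Model Registry plays)."""
    model = train_model(dataset_uri)
    return upload_model(model["artifact_uri"], registry)

registry = {}
print(weekly_pipeline_run("gs://bucket/week-01", registry))  # v1
print(weekly_pipeline_run("gs://bucket/week-02", registry))  # v2
print(registry["v1"])  # gs://bucket/week-01/model
```

The sketch also shows why a scheduled notebook execution alone (options A and B) falls short: nothing in it versions the artifacts or records lineage between a run and the model it produced.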


Question 336
You have developed a custom ML model using Vertex AI and want to deploy it for online serving. You need to optimize the model's serving performance by ensuring that the model can handle high throughput while minimizing latency. You want to use the simplest solution. What should you do?
A. Deploy the model to a Vertex AI endpoint resource to automatically scale the serving backend based on the throughput. Configure the endpoint's autoscaling settings to minimize latency.
B. Implement a containerized serving solution using Cloud Run. Configure the concurrency settings to handle multiple requests simultaneously.
C. Apply simplification techniques such as model pruning and quantization to reduce the model's size and complexity. Retrain the model using Vertex AI to improve its performance, latency, memory, and throughput.
D. Enable request-response logging for the model hosted in Vertex AI. Use Looker Studio to analyze the logs, identify bottlenecks, and optimize the model accordingly.
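Sizing a serving backend for high throughput at low latency follows Little's law: average in-flight requests = arrival rate x latency. A worked example with invented targets, relevant to the autoscaling settings in option A and the concurrency settings in option B:

```python
import math

def required_concurrency(throughput_rps, latency_ms):
    """Little's law: average in-flight requests = arrival rate x latency.
    This is the concurrency the serving backend must sustain."""
    return throughput_rps * latency_ms / 1000

def replicas_needed(throughput_rps, latency_ms, concurrency_per_replica):
    in_flight = required_concurrency(throughput_rps, latency_ms)
    return math.ceil(in_flight / concurrency_per_replica)

# Invented targets: 500 requests/s at 80 ms, 8 concurrent requests per replica.
print(required_concurrency(500, 80))  # 40.0 in-flight requests on average
print(replicas_needed(500, 80, 8))    # 5 replicas
```

A Vertex AI endpoint with autoscaling performs this replica adjustment automatically as throughput changes, which is what makes option A the simplest fit for the stated requirement.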

Question 337
Your company needs to generate product summaries for vendors. You evaluate a foundation model from Model Garden for text summarization and find that the style of the summaries is not aligned with your company's brand voice. How should you improve this LLM-based summarization model to better meet your business objectives?
A. Replace the pre-trained model with another model in Model Garden.
B. Fine-tune the model using a company-specific dataset.
C. Increase the model's temperature parameter.
D. Tune the token output limit in the response.
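Option C's temperature parameter controls the randomness of token sampling, not the style of the output, which is why brand-voice alignment points toward fine-tuning instead. A small sketch of what temperature actually does to a sampling distribution (the logits are invented):

```python
import math

def softmax_with_temperature(logits, temperature=1.0):
    """Convert logits to probabilities; a lower temperature sharpens the
    distribution, a higher temperature flattens it toward uniform."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)                      # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.1]
cold = softmax_with_temperature(logits, temperature=0.5)
hot = softmax_with_temperature(logits, temperature=2.0)

# Lower temperature concentrates mass on the top token; it never changes
# which tokens the model prefers, only how deterministically it picks them.
print(cold[0] > hot[0])  # True
```

Since temperature reshapes probabilities over tokens the model already favors, no setting of it can teach the model a company-specific writing style the way tuning on company data can.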

Question 338
You built a custom Vertex AI pipeline job that preprocesses images and trains an object detection model. The pipeline currently uses 1 n1-standard-8 machine with 1 NVIDIA Tesla V100 GPU. You want to reduce the model training time without compromising model accuracy. What should you do?
A. Reduce the number of layers in your object detection model.
B. Train the same model on a stratified subset of your dataset.
C. Update the WorkerPoolSpec to use a machine with 24 vCPUs and 1 NVIDIA Tesla V100 GPU.
D. Update the WorkerPoolSpec to use a machine with 24 vCPUs and 3 NVIDIA Tesla V100 GPUs.
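Options C and D differ in what they scale: adding vCPUs only helps CPU-bound work, while adding GPUs speeds up the GPU-bound training itself. A rough Amdahl's-law estimate makes the difference concrete; the 12-hour baseline and the 90% GPU-parallel fraction below are invented for illustration.

```python
def estimated_training_time(base_hours, gpu_fraction, n_gpus):
    """Amdahl's-law estimate: only the GPU-parallel fraction of the job
    speeds up when GPUs are added; the rest (I/O, preprocessing) does not."""
    serial = base_hours * (1 - gpu_fraction)
    parallel = base_hours * gpu_fraction / n_gpus
    return serial + parallel

# Invented numbers: a 12-hour job with 90% of time in GPU compute.
print(estimated_training_time(12, 0.9, 1))  # about 12 hours on 1 GPU
print(estimated_training_time(12, 0.9, 3))  # about 4.8 hours on 3 GPUs
```

Because neither architecture nor data is changed, the speedup comes without the accuracy risk carried by pruning layers (option A) or training on a subset (option B).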

Question 339
You are a SQL analyst. You need to use a TensorFlow customer segmentation model stored in Cloud Storage. You want to use the simplest and most efficient approach. What should you do?
A. Import the model into Vertex AI Model Registry. Deploy the model to a Vertex AI endpoint, and use SQL for inference in BigQuery.
B. Deploy the model by using TensorFlow Serving, and call for inference from BigQuery.
C. Convert the model into a BigQuery ML model, and use SQL for inference.
D. Import the model into BigQuery, and use SQL for inference.
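Option D rests on BigQuery ML's ability to import a saved TensorFlow model from Cloud Storage with a single `CREATE MODEL` statement, after which it is queryable with `ML.PREDICT` in plain SQL. A sketch that composes the statement in Python; the project, dataset, and bucket names are placeholders, not values from the question.

```python
def import_tf_model_sql(dataset, model_name, gcs_path):
    """Build the BigQuery ML statement that imports a saved TensorFlow
    model from Cloud Storage so it can be queried with ML.PREDICT."""
    return (
        f"CREATE OR REPLACE MODEL `{dataset}.{model_name}`\n"
        f"OPTIONS (MODEL_TYPE='TENSORFLOW',\n"
        f"         MODEL_PATH='{gcs_path}')"
    )

sql = import_tf_model_sql("analytics", "customer_segments",
                          "gs://my-bucket/segmentation-model/*")
print(sql)
```

For a SQL analyst this avoids standing up any serving infrastructure at all, which is what deploying to a Vertex AI endpoint (option A) or TensorFlow Serving (option B) would require.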


