

Google Professional-Machine-Learning Exam

Page 30/34
Viewing Questions 291-300 out of 339 Questions

Question 291
You work for a hospital. You received approval to collect the necessary patient data, and you trained a Vertex AI tabular AutoML model that calculates patients' risk score for hospital admission. You deployed the model. However, you're concerned that patient demographics might change over time and alter the feature interactions and impact prediction accuracy. You want to be alerted if feature interactions change, and you want to understand the importance of the features for the predictions. You want your alerting approach to minimize cost. What should you do?
A. Create a feature drift monitoring job. Set the sampling rate to 1 and the monitoring frequency to weekly.
B. Create a feature drift monitoring job. Set the sampling rate to 0.1 and the monitoring frequency to weekly.
C. Create a feature attribution drift monitoring job. Set the sampling rate to 1 and the monitoring frequency to weekly.
D. Create a feature attribution drift monitoring job. Set the sampling rate to 0.1 and the monitoring frequency to weekly.
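
For reference, a feature attribution drift monitoring job with a 0.1 sampling rate (as described in option D) might be configured roughly as follows. This is a minimal sketch assuming the google-cloud-aiplatform SDK's model_monitoring helpers; the endpoint ID, feature names, thresholds, and alert email are placeholders, and exact parameter names may differ by SDK version.

```python
# Sketch: a Vertex AI model-monitoring job that tracks feature attribution drift
# on a deployed endpoint, sampling 10% of requests and running weekly.
from google.cloud import aiplatform
from google.cloud.aiplatform import model_monitoring

aiplatform.init(project="my-project", location="us-central1")

# Sample 10% of prediction requests to keep monitoring cost low.
sampling = model_monitoring.RandomSampleConfig(sample_rate=0.1)

# Weekly monitoring cadence (interval assumed to be expressed in hours here).
schedule = model_monitoring.ScheduleConfig(monitor_interval=168)

# Alert by email when drift thresholds are exceeded.
alerting = model_monitoring.EmailAlertConfig(user_emails=["ml-team@example.com"])

# Attribution drift requires explanations to be enabled alongside drift detection.
objective = model_monitoring.ObjectiveConfig(
    drift_detection_config=model_monitoring.DriftDetectionConfig(
        attribute_drift_thresholds={"age": 0.05, "zip_code": 0.05}
    ),
    explanation_config=model_monitoring.ExplanationConfig(),
)

job = aiplatform.ModelDeploymentMonitoringJob.create(
    display_name="admission-risk-attribution-drift",
    endpoint="projects/my-project/locations/us-central1/endpoints/1234567890",
    logging_sampling_strategy=sampling,
    schedule_config=schedule,
    alert_config=alerting,
    objective_configs=objective,
)
```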

Question 292
You are developing a TensorFlow Extended (TFX) pipeline with standard TFX components. The pipeline includes data preprocessing steps. After the pipeline is deployed to production, it will process up to 100 TB of data stored in BigQuery. You need the data preprocessing steps to scale efficiently, publish metrics and parameters to Vertex AI Experiments, and track artifacts by using Vertex ML Metadata. How should you configure the pipeline run?
A. Run the TFX pipeline in Vertex AI Pipelines. Configure the pipeline to use Vertex AI Training jobs with distributed processing.
B. Run the TFX pipeline in Vertex AI Pipelines. Set the appropriate Apache Beam parameters in the pipeline to run the data preprocessing steps in Dataflow.
C. Run the TFX pipeline in Dataproc by using the Apache Beam TFX orchestrator. Set the appropriate Vertex AI permissions in the job to publish metadata in Vertex AI.
D. Run the TFX pipeline in Dataflow by using the Apache Beam TFX orchestrator. Set the appropriate Vertex AI permissions in the job to publish metadata in Vertex AI.
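
As context for option B, a TFX pipeline can pass Apache Beam pipeline arguments so that its Beam-based components run on Dataflow while the pipeline itself is compiled for Vertex AI Pipelines. A minimal sketch assuming the TFX 1.x "v1" namespace; the project, bucket, and query values are placeholders, and only a single ExampleGen component is shown.

```python
# Sketch: a TFX pipeline compiled for Vertex AI Pipelines, with Apache Beam
# pipeline arguments so that Beam-based preprocessing steps run on Dataflow.
from tfx import v1 as tfx

beam_pipeline_args = [
    "--runner=DataflowRunner",            # scale preprocessing out on Dataflow
    "--project=my-project",
    "--region=us-central1",
    "--temp_location=gs://my-bucket/tmp",
]

# Reads training data directly from BigQuery (placeholder query).
example_gen = tfx.extensions.google_cloud_big_query.BigQueryExampleGen(
    query="SELECT * FROM `my-project.my_dataset.training_data`"
)

pipeline = tfx.dsl.Pipeline(
    pipeline_name="preprocess-and-train",
    pipeline_root="gs://my-bucket/pipeline-root",
    components=[example_gen],             # add Transform, Trainer, etc. here
    beam_pipeline_args=beam_pipeline_args,
)

# Compile to a job spec that Vertex AI Pipelines can run; Vertex ML Metadata
# then tracks the pipeline's artifacts.
runner = tfx.orchestration.experimental.KubeflowV2DagRunner(
    config=tfx.orchestration.experimental.KubeflowV2DagRunnerConfig(),
    output_filename="pipeline.json",
)
runner.run(pipeline)
```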

Question 293
You are developing a batch process that will train a custom model and perform predictions. You need to be able to show lineage for both your model and the batch predictions. What should you do?
A. 1. Upload your dataset to BigQuery.
2. Use a Vertex AI custom training job to train your model.
3. Generate predictions by using Vertex AI SDK custom prediction routines.
B. 1. Use Vertex AI Experiments to evaluate model performance during training.
2. Register your model in Vertex AI Model Registry.
3. Generate batch predictions in Vertex AI.
C. 1. Create a Vertex AI managed dataset.
2. Use a Vertex AI training pipeline to train your model.
3. Generate batch predictions in Vertex AI.
D. 1. Use a Vertex AI Pipelines custom training job component to train your model.
2. Generate predictions by using a Vertex AI Pipelines model batch predict component.
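
For context, the Vertex AI SDK flow described in option C (managed dataset, training pipeline, batch predictions) might look roughly like this. Resource names, the training script, and the container images are placeholders.

```python
# Sketch: managed dataset -> training pipeline -> batch prediction, which lets
# Vertex AI record lineage from dataset to model to prediction job.
from google.cloud import aiplatform

aiplatform.init(project="my-project", location="us-central1")

# 1. Managed dataset (tracked as an artifact).
dataset = aiplatform.TabularDataset.create(
    display_name="training-data",
    gcs_source="gs://my-bucket/data/train.csv",
)

# 2. Training pipeline that trains a custom model and registers it.
job = aiplatform.CustomTrainingJob(
    display_name="custom-train",
    script_path="trainer/task.py",
    container_uri="us-docker.pkg.dev/vertex-ai/training/tf-cpu.2-12:latest",
    model_serving_container_image_uri=(
        "us-docker.pkg.dev/vertex-ai/prediction/tf2-cpu.2-12:latest"
    ),
)
model = job.run(dataset=dataset, model_display_name="custom-model")

# 3. Batch prediction job, linked to the model in the lineage graph.
model.batch_predict(
    job_display_name="batch-predictions",
    gcs_source="gs://my-bucket/data/predict.jsonl",
    gcs_destination_prefix="gs://my-bucket/predictions/",
    machine_type="n1-standard-4",
)
```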

Question 294
You work for a company that sells corporate electronic products to thousands of businesses worldwide. Your company stores historical customer data in BigQuery. You need to build a model that predicts customer lifetime value over the next three years. You want to use the simplest approach to build the model. What should you do?
A. Create a Vertex AI Workbench notebook. Use IPython magic to run the CREATE MODEL statement to create an ARIMA model.
B. Access BigQuery Studio in the Google Cloud console. Run the CREATE MODEL statement in the SQL editor to create an AutoML regression model.
C. Create a Vertex AI Workbench notebook. Use IPython magic to run the CREATE MODEL statement to create an AutoML regression model.
D. Access BigQuery Studio in the Google Cloud console. Run the CREATE MODEL statement in the SQL editor to create an ARIMA model.
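
For reference, a CREATE MODEL statement for a BigQuery ML AutoML regressor (the model type named in options B and C) can be run in the BigQuery Studio SQL editor or, as sketched here, submitted from Python with the BigQuery client. The dataset, table, and column names are hypothetical.

```python
# Sketch: issuing a BigQuery ML CREATE MODEL statement for an AutoML regressor.
# The same SQL can be pasted directly into the BigQuery Studio SQL editor.
from google.cloud import bigquery

client = bigquery.Client(project="my-project")

create_model_sql = """
CREATE OR REPLACE MODEL `my-project.sales.customer_ltv_model`
OPTIONS (
  model_type = 'AUTOML_REGRESSOR',
  input_label_cols = ['ltv_3_years']
) AS
SELECT * FROM `my-project.sales.customer_history`;
"""

client.query(create_model_sql).result()  # waits for the training query to finish
```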

Question 295
You work at a retail company and are tasked with developing an ML model to predict product sales. Your company’s historical sales data is stored in BigQuery and includes features such as date, store location, product category, and promotion details. You need to choose the most effective combination of a BigQuery ML model and feature engineering to maximize prediction accuracy. What should you do?
A. Use a linear regression model. Perform one-hot encoding on categorical features, and create additional features based on the date, such as day of the week or month.
B. Use a boosted tree model. Perform label encoding on categorical features, and transform the date column into numeric values.
C. Use an autoencoder model. Perform label encoding on categorical features, and normalize the date column.
D. Use a matrix factorization model. Perform one-hot encoding on categorical features, and create interaction features between the store location and product category variables.
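
As context for the feature-engineering choices above, date-derived features such as day of week or month are typically expressed in BigQuery ML through a TRANSFORM clause. A minimal sketch with hypothetical table and column names; the model type shown is only one of several regression options BigQuery ML supports.

```python
# Sketch: a BigQuery ML regression model with date-derived features built in a
# TRANSFORM clause, submitted via the BigQuery Python client.
from google.cloud import bigquery

client = bigquery.Client(project="my-project")

create_model_sql = """
CREATE OR REPLACE MODEL `my-project.retail.sales_forecast_model`
TRANSFORM (
  EXTRACT(DAYOFWEEK FROM date) AS day_of_week,   -- date-derived feature
  EXTRACT(MONTH FROM date) AS month,             -- captures seasonality
  store_location,
  product_category,
  promotion_details,
  sales                                          -- label column
)
OPTIONS (
  model_type = 'BOOSTED_TREE_REGRESSOR',
  input_label_cols = ['sales']
) AS
SELECT date, store_location, product_category, promotion_details, sales
FROM `my-project.retail.historical_sales`;
"""

client.query(create_model_sql).result()
```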


Question 296
Your organization’s employee onboarding team wants you to build an interactive self-help tool for new employees. The tool needs to receive queries from users and provide answers from the organization’s internal documentation. This documentation is spread across standalone documents such as PDF files. You want to build a solution quickly while minimizing maintenance overhead. What should you do?
A. Create a custom chatbot user interface hosted on App Engine. Use Vertex AI to fine-tune a Gemini model on the organization’s internal documentation. Send users’ queries to the fine-tuned model by using the custom chatbot and return the model’s responses to the users.
B. Deploy an internal website to a Google Kubernetes Engine (GKE) cluster. Build a search index by ingesting all of the organization’s internal documentation. Use Vertex AI Vector Search to implement a semantic search that retrieves results from the search index based on the query entered into the search box.
C. Use Vertex AI Agent Builder to create an agent. Securely index the organization’s internal documentation to the agent’s datastore. Send users’ queries to the agent and return the agent’s grounded responses to the users.
D. Deploy an internal website to a Google Kubernetes Engine (GKE) cluster. Organize the relevant internal documentation into sections. Collect user feedback on website content and store it in BigQuery. Request that the onboarding team regularly update the links based on user feedback.
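
For context on option C, a query against an Agent Builder data store can be issued through the Discovery Engine search client. A minimal sketch assuming the discoveryengine_v1 library; the project, location, and data store IDs in the serving config path are placeholders.

```python
# Sketch: querying a Vertex AI Agent Builder (Discovery Engine) search app whose
# data store has the internal onboarding PDFs indexed.
from google.cloud import discoveryengine_v1 as discoveryengine

client = discoveryengine.SearchServiceClient()

serving_config = (
    "projects/my-project/locations/global/collections/default_collection/"
    "dataStores/onboarding-docs/servingConfigs/default_config"
)

request = discoveryengine.SearchRequest(
    serving_config=serving_config,
    query="How do I enroll in the benefits program?",
    page_size=5,
)

for result in client.search(request):
    # Each result is grounded in the indexed onboarding documentation.
    print(result.document.name)
```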

Question 297
You work for an ecommerce company that wants to automatically classify products in images to improve user experience. You have a substantial dataset of labeled images depicting various unique products. You need to implement a solution for identifying custom products that is scalable, effective, and can be rapidly deployed. What should you do?
A. Develop a rule-based system to categorize the images.
B. Use a TensorFlow deep learning model that is trained on the image dataset.
C. Use a pre-trained object detection model from Model Garden.
D. Use AutoML Vision to train a model using the image dataset.
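
For reference, option D corresponds to an AutoML image classification training job in the Vertex AI SDK. A minimal sketch with placeholder Cloud Storage paths and training budget.

```python
# Sketch: training an AutoML image classification model on labeled product images.
from google.cloud import aiplatform

aiplatform.init(project="my-project", location="us-central1")

# Import the labeled images from a GCS manifest into a managed image dataset.
dataset = aiplatform.ImageDataset.create(
    display_name="product-images",
    gcs_source="gs://my-bucket/labels/import.csv",
    import_schema_uri=aiplatform.schema.dataset.ioformat.image.single_label_classification,
)

job = aiplatform.AutoMLImageTrainingJob(
    display_name="product-classifier",
    prediction_type="classification",
)

model = job.run(
    dataset=dataset,
    model_display_name="product-classifier",
    budget_milli_node_hours=8000,  # 8 node hours; tune to your accuracy/cost needs
)
```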

Question 298
Your team is developing a customer support chatbot for a healthcare company that processes sensitive patient information. You need to ensure that all personally identifiable information (PII) captured during customer conversations is protected prior to storing or analyzing the data. What should you do?
A. Use the Cloud Natural Language API to identify and redact PII in chatbot conversations.
B. Use the Cloud Natural Language API to classify and categorize all data, including PII, in chatbot conversations.
C. Use the DLP API to encrypt PII in chatbot conversations before storing the data.
D. Use the DLP API to scan and de-identify PII in chatbot conversations before storing the data.
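
For context on the DLP options, de-identifying PII in conversation text with the DLP API (the approach in option D) looks roughly like this. The info types, project ID, and sample text are illustrative.

```python
# Sketch: de-identifying PII in a chat transcript with the Cloud DLP API before
# the text is stored or analyzed.
from google.cloud import dlp_v2

client = dlp_v2.DlpServiceClient()
parent = "projects/my-project/locations/global"

response = client.deidentify_content(
    request={
        "parent": parent,
        "inspect_config": {
            "info_types": [{"name": "PERSON_NAME"}, {"name": "PHONE_NUMBER"}],
        },
        "deidentify_config": {
            "info_type_transformations": {
                "transformations": [
                    # Replace each finding with its info type, e.g. [PERSON_NAME].
                    {"primitive_transformation": {"replace_with_info_type_config": {}}}
                ]
            }
        },
        "item": {"value": "Hi, this is Alex Smith, my number is 555-0100."},
    }
)

print(response.item.value)  # de-identified text, safe to store or analyze
```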

Question 299
Your team is experimenting with developing smaller, distilled LLMs for a specific domain. You have performed batch inference on a dataset by using several variations of your distilled LLMs and stored the batch inference outputs in Cloud Storage. You need to create an evaluation workflow that integrates with your existing Vertex AI pipeline to assess the performance of the LLM versions while also tracking artifacts. What should you do?
A. Develop a custom Python component that reads the batch inference outputs from Cloud Storage, calculates evaluation metrics, and writes the results to a BigQuery table.
B. Use a Dataflow component that processes the batch inference outputs from Cloud Storage, calculates evaluation metrics in a distributed manner, and writes the results to a BigQuery table.
C. Create a custom Vertex AI Pipelines component that reads the batch inference outputs from Cloud Storage, calculates evaluation metrics, and writes the results to a BigQuery table.
D. Use the Automatic side-by-side (AutoSxS) pipeline component that processes the batch inference outputs from Cloud Storage, aggregates evaluation metrics, and writes the results to a BigQuery table.
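
For reference, option C amounts to a lightweight KFP v2 component that plugs into an existing Vertex AI pipeline. A minimal sketch with placeholder parsing and metric logic; the GCS URI and BigQuery table are passed in as parameters.

```python
# Sketch: a custom Vertex AI Pipelines (KFP v2) component that reads batch
# inference outputs from Cloud Storage, computes evaluation metrics, and writes
# them to BigQuery.
from kfp import dsl


@dsl.component(packages_to_install=["google-cloud-storage", "google-cloud-bigquery"])
def evaluate_llm_outputs(predictions_uri: str, results_table: str):
    """Reads predictions from GCS, scores them, and writes metrics to BigQuery."""
    from google.cloud import bigquery, storage

    # Read the batch inference outputs (placeholder parsing logic).
    bucket_name, blob_path = predictions_uri.replace("gs://", "", 1).split("/", 1)
    text = storage.Client().bucket(bucket_name).blob(blob_path).download_as_text()
    num_examples = len(text.splitlines())

    # Placeholder metric; replace with your domain-specific evaluation.
    rows = [{"metric": "num_examples", "value": float(num_examples)}]
    bigquery.Client().insert_rows_json(results_table, rows)
```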

Question 300
You work for a bank. You need to train a model by using unstructured data stored in Cloud Storage that predicts whether credit card transactions are fraudulent. The data needs to be converted to a structured format to facilitate analysis in BigQuery. Company policy requires that data containing personally identifiable information (PII) remain in Cloud Storage. You need to implement a scalable solution that preserves the data’s value for analysis. What should you do?
A. Use BigQuery’s authorized views and column-level access controls to restrict access to PII within the dataset.
B. Use the DLP API to de-identify the sensitive data before loading it into BigQuery.
C. Store the unstructured data in a separate PII-compliant BigQuery database.
D. Remove the sensitive data from the files manually before loading them into BigQuery.
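
As context for option B, once records are de-identified (for example with a DLP call like the one sketched under Question 298), the structured rows can be loaded into BigQuery while the original PII-bearing files remain in Cloud Storage. The table name and row contents below are placeholders.

```python
# Sketch: loading de-identified, structured records into BigQuery; the raw files
# containing PII stay in Cloud Storage per company policy.
from google.cloud import bigquery

client = bigquery.Client(project="my-project")
table_id = "my-project.fraud.transactions_deidentified"

deidentified_rows = [
    {"transaction_id": "t-001", "amount": 42.50, "card_holder": "[PERSON_NAME]"},
    {"transaction_id": "t-002", "amount": 310.00, "card_holder": "[PERSON_NAME]"},
]

job = client.load_table_from_json(
    deidentified_rows,
    table_id,
    job_config=bigquery.LoadJobConfig(autodetect=True),
)
job.result()  # wait for the load job to complete
```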


