Question 301
You are an ML engineer at a bank. You need to build a solution that provides transparent and understandable explanations for AI-driven decisions on loan approvals, credit limits, and interest rates. You want the solution to require minimal operational overhead. What should you do?
A. Deploy the Learning Interpretability Tool (LIT) on App Engine to provide explainability and visualization of the output.
B. Use Vertex Explainable AI to generate feature attributions, and use feature-based explanations for your models.
C. Use AutoML Tables with built-in explainability features, and use Shapley values for explainability.
D. Deploy pre-trained models from TensorFlow Hub to provide explainability using visualization tools.
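For context on option B, a minimal sketch of enabling Vertex Explainable AI feature attributions with the Vertex AI SDK is shown below; the project ID, bucket path, feature names, serving container, and example instance are illustrative assumptions, not part of the question.

from google.cloud import aiplatform

aiplatform.init(project="my-project", location="us-central1")

# Attach an explanation spec when uploading the model so the endpoint can return
# sampled Shapley feature attributions alongside each prediction.
model = aiplatform.Model.upload(
    display_name="loan-decision-model",
    artifact_uri="gs://my-bucket/model/",  # hypothetical model artifacts
    serving_container_image_uri="us-docker.pkg.dev/vertex-ai/prediction/sklearn-cpu.1-0:latest",
    explanation_parameters=aiplatform.explain.ExplanationParameters(
        {"sampled_shapley_attribution": {"path_count": 10}}
    ),
    explanation_metadata=aiplatform.explain.ExplanationMetadata(
        inputs={"loan_features": {
            "encoding": "BAG_OF_FEATURES",
            "index_feature_mapping": ["income", "credit_score", "debt_to_income"],
        }},
        outputs={"approval_score": {}},
    ),
)
endpoint = model.deploy(machine_type="n1-standard-4")

# explain() returns the prediction plus per-feature attributions.
response = endpoint.explain(instances=[[82000, 714, 0.31]])
print(response.explanations[0].attributions[0].feature_attributions)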
Question 302
You are building an application that extracts information from invoices and receipts. You want to implement this application with minimal custom code and training. What should you do?
A. Use the Cloud Vision API with TEXT_DETECTION type to extract text from the invoices and receipts, and use a pre-built natural language processing (NLP) model to parse the extracted text.
B. Use the Cloud Document AI API to extract information from the invoices and receipts.
C. Use Vertex AI Agent Builder with the pre-built Layout Parser model to extract information from the invoices and receipts.
D. Train an AutoML Natural Language model to classify and extract information from the invoices and receipts.
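A minimal sketch of the Document AI path referenced in option B, assuming a pre-trained invoice or expense processor has already been created (the project, location, processor ID, and file name below are placeholders):

from google.cloud import documentai

client = documentai.DocumentProcessorServiceClient()
# Resource name format: projects/{project}/locations/{location}/processors/{processor_id}
name = client.processor_path("my-project", "us", "my-invoice-processor-id")

with open("invoice.pdf", "rb") as f:
    raw_document = documentai.RawDocument(content=f.read(), mime_type="application/pdf")

result = client.process_document(
    request=documentai.ProcessRequest(name=name, raw_document=raw_document)
)

# Pre-trained invoice/expense processors return typed entities such as supplier,
# invoice date, line items, and total amount.
for entity in result.document.entities:
    print(entity.type_, entity.mention_text, entity.confidence)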
Question 303
You work for a media company that operates a streaming movie platform where users can search for movies in a database. The existing search algorithm uses keyword matching to return results. Recently, you have observed an increase in searches using complex semantic queries that include the movies’ metadata such as the actor, genre, and director.
You need to build a revamped search solution that will provide better results, and you need to build this proof of concept as quickly as possible. How should you build the search platform?
A. Use a foundational large language model (LLM) from Model Garden as the search platform’s backend.
B. Configure Vertex AI Vector Search as the search platform’s backend.
C. Use a BERT-based model and host it on a Vertex AI endpoint.
D. Create the search platform through Vertex AI Agent Builder.
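For reference, the Vertex AI Agent Builder (Vertex AI Search) backend in option D can be queried with the Discovery Engine client once a data store has been created over the movie catalog; the resource path and query below are illustrative assumptions.

from google.cloud import discoveryengine_v1 as discoveryengine

client = discoveryengine.SearchServiceClient()

# Hypothetical serving config for a data store built from the movie metadata.
serving_config = (
    "projects/my-project/locations/global/collections/default_collection/"
    "dataStores/movie-catalog/servingConfigs/default_search"
)

request = discoveryengine.SearchRequest(
    serving_config=serving_config,
    query="90s science fiction movies directed by women",
    page_size=10,
)

# Semantic search over the indexed catalog; each result carries the movie metadata.
for result in client.search(request):
    print(result.document.struct_data)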
Question 304
You are an AI engineer who works for a popular video streaming platform. You built a classification model using PyTorch to predict customer churn. Each week, the customer retention team plans to contact customers who have been identified as at risk of churning with personalized offers. You want to deploy the model while minimizing maintenance effort. What should you do?
A. Use Vertex AI’s prebuilt containers for prediction. Deploy the container on Cloud Run to generate online predictions.
B. Use Vertex AI’s prebuilt containers for prediction. Deploy the model on Google Kubernetes Engine (GKE), and configure the model for batch prediction.
C. Deploy the model to a Vertex AI endpoint, and configure the model for batch prediction. Schedule the batch prediction to run weekly.
D. Deploy the model to a Vertex AI endpoint, and configure the model for online prediction. Schedule a job to query this endpoint weekly.
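A minimal sketch of the weekly batch prediction flow described in the options, using the Vertex AI SDK against an already registered model (the model resource name and BigQuery tables are placeholders); the job itself could be kicked off on a weekly schedule by Cloud Scheduler or a pipeline.

from google.cloud import aiplatform

aiplatform.init(project="my-project", location="us-central1")

model = aiplatform.Model("projects/my-project/locations/us-central1/models/1234567890")

# Weekly scoring run: reads this week's customer features from BigQuery and
# writes churn scores back to BigQuery for the retention team.
batch_job = model.batch_predict(
    job_display_name="weekly-churn-scoring",
    bigquery_source="bq://my-project.crm.weekly_customer_features",
    bigquery_destination_prefix="bq://my-project.crm.churn_scores",
    machine_type="n1-standard-4",
    sync=True,
)
print(batch_job.state)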
Question 305
Your company recently migrated several of its ML models to Google Cloud. You have started developing models in Vertex AI. You need to implement a system that tracks model artifacts and model lineage. You want to create a simple, effective solution that can also be reused for future models. What should you do?
A. Use a combination of Vertex AI Pipelines and the Vertex AI SDK to integrate metadata tracking into the ML workflow.
B. Use Vertex AI Pipelines for model artifacts and MLflow for model lineage.
C. Use Vertex AI Experiments for model artifacts and use Vertex ML Metadata for model lineage.
D. Implement a scheduled metadata tracking solution using Cloud Composer and Cloud Run functions.
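To illustrate option A, a minimal sketch of SDK-based experiment and metadata tracking is shown below; runs started this way are recorded in Vertex ML Metadata, and Vertex AI Pipelines records artifact lineage automatically for pipeline runs. The experiment name, parameters, and metric values are placeholders.

from google.cloud import aiplatform

aiplatform.init(
    project="my-project",
    location="us-central1",
    experiment="churn-model-dev",   # hypothetical experiment name
)

aiplatform.start_run("training-run-001")
aiplatform.log_params({"learning_rate": 0.01, "epochs": 20})
# ... model training happens here ...
aiplatform.log_metrics({"auc": 0.91})
aiplatform.end_run()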
Question 306
You work for a large retailer, and you need to build a model to predict customer churn. The company has a dataset of historical customer data, including customer demographics, purchase history, and website activity. You need to create the model in BigQuery ML and thoroughly evaluate its performance. What should you do?
A. Create a linear regression model in BigQuery ML, and register the model in Vertex AI Model Registry. Use Vertex AI to evaluate the model performance.
B. Create a logistic regression model in BigQuery ML, and register the model in Vertex AI Model Registry. Use ML.ARIMA_EVALUATE function to evaluate the model performance.
C. Create a linear regression model in BigQuery ML. Use the ML.EVALUATE function to evaluate the model performance.
D. Create a logistic regression model in BigQuery ML. Use the ML.CONFUSION_MATRIX function to evaluate the model performance.
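The BigQuery ML workflow referenced in the options can be sketched with the BigQuery Python client as follows (dataset, table, and label column names are illustrative). ML.EVALUATE returns classification metrics such as precision, recall, and ROC AUC, while ML.CONFUSION_MATRIX breaks predictions down by class.

from google.cloud import bigquery

client = bigquery.Client(project="my-project")

# Train a binary classifier on the labeled historical customer table.
client.query("""
    CREATE OR REPLACE MODEL `crm.churn_model`
    OPTIONS (model_type = 'logistic_reg', input_label_cols = ['churned']) AS
    SELECT * FROM `crm.customer_history`
""").result()

# Standard evaluation metrics for the classifier.
for row in client.query("SELECT * FROM ML.EVALUATE(MODEL `crm.churn_model`)").result():
    print(dict(row))

# Confusion matrix for a closer look at false positives and false negatives.
for row in client.query("SELECT * FROM ML.CONFUSION_MATRIX(MODEL `crm.churn_model`)").result():
    print(dict(row))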
Question 307
You are an AI architect at a popular photo sharing social media platform. Your organization's content moderation team currently scans images uploaded by users and removes explicit images manually. You want to implement an AI service to automatically prevent users from uploading explicit images. What should you do?
A. Train an image clustering model by using TensorFlow in a Vertex AI Workbench instance. Deploy this model to a Vertex AI endpoint and configure it for online inference. Run this model each time a new image is uploaded to identify and block inappropriate uploads.
B. Develop a custom TensorFlow model in a Vertex AI Workbench instance. Train the model on a dataset of manually labeled images. Deploy the model to a Vertex AI endpoint. Run periodic batch inference to identify inappropriate uploads and report them to the content moderation team.
C. Create a dataset using manually labeled images. Ingest this dataset into AutoML. Train an image classification model and deploy it to a Vertex AI endpoint. Integrate this endpoint with the image upload process to identify and block inappropriate uploads. Monitor predictions and periodically retrain the model.
D. Send a copy of every user-uploaded image to a Cloud Storage bucket. Configure a Cloud Run function that triggers the Cloud Vision API to detect explicit content each time a new image is uploaded. Report the classifications to the content moderation team for review.
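As a point of comparison for option D, the Cloud Vision API exposes SafeSearch detection that scores images for explicit content; a minimal sketch follows, with the file name and blocking threshold chosen only for illustration.

from google.cloud import vision

client = vision.ImageAnnotatorClient()

with open("upload.jpg", "rb") as f:
    image = vision.Image(content=f.read())

annotation = client.safe_search_detection(image=image).safe_search_annotation

# Likelihood values range from VERY_UNLIKELY to VERY_LIKELY.
if (annotation.adult >= vision.Likelihood.LIKELY
        or annotation.racy >= vision.Likelihood.LIKELY):
    print("Block this upload and notify the content moderation team")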
Question 308
You are an ML engineer at a bank. The bank's leadership team wants to reduce the number of loan defaults. The bank has labeled historic data about loan defaults stored in BigQuery. You have been asked to use AI to support the loan application process. For compliance reasons, you need to provide explanations for loan rejections. What should you do?
A. Import the historic loan default data into AutoML. Train and deploy a linear regression model to predict default probability. Report the probability of default for each loan application.
B. Create a custom application that uses the Gemini large language model (LLM). Provide the historic data as context to the model, and prompt the model to predict customer defaults. Report the prediction and explanation provided by the LLM for each loan application.
C. Train and deploy a BigQuery ML classification model trained on historic loan default data. Enable feature-based explanations for each prediction. Report the prediction, probability of default, and feature attributions for each loan application.
D. Load the historic loan default data into a Vertex AI Workbench instance. Train a deep learning classification model using TensorFlow to predict loan default. Run inference for each loan application, and report the predictions.
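For option C, BigQuery ML can return per-prediction feature attributions with ML.EXPLAIN_PREDICT once a classification model has been trained on the historic default data; the dataset, model, and table names below are placeholders.

from google.cloud import bigquery

client = bigquery.Client(project="my-project")

# Prediction, probability of default, and top feature attributions per application.
query = """
    SELECT *
    FROM ML.EXPLAIN_PREDICT(
        MODEL `lending.default_model`,
        TABLE `lending.new_applications`,
        STRUCT(5 AS top_k_features)
    )
"""
for row in client.query(query).result():
    print(dict(row))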
Question 309
You are developing a natural language processing model that analyzes customer feedback to identify positive, negative, and neutral experiences. During the testing phase, you notice that the model demonstrates a significant bias against certain demographic groups, leading to skewed analysis results. You want to address this issue following Google's responsible AI practices. What should you do?
A. Use Vertex AI’s model evaluation to assess bias in the model’s predictions, and use post-processing to adjust outputs for identified demographic discrepancies.
B. Implement a more complex model architecture that can capture nuanced patterns in language to reduce bias.
C. Audit the training dataset to identify underrepresented groups and augment the dataset with additional samples before retraining the model.
D. Use Vertex Explainable AI to generate explanations and systematically adjust the predictions to address identified biases.
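As a small illustration of the dataset audit described in option C, group-level label distributions can reveal under-represented demographics before retraining; the file and column names here are hypothetical.

import pandas as pd

df = pd.read_csv("feedback_training_data.csv")   # hypothetical labeled training data

# How many examples does each demographic group contribute?
print(df["demographic_group"].value_counts())

# Does the sentiment label distribution differ sharply across groups?
print(df.groupby("demographic_group")["sentiment_label"].value_counts(normalize=True))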
Question 310
You recently deployed an image classification model on Google Cloud. You used Cloud Build to build a CI/CD pipeline for the model. You need to ensure that the model stays up-to-date with data and code changes by using an efficient retraining process. What should you do?
A. Use Cloud Run functions to monitor data drift in real time and trigger a Vertex AI Training job to retrain the model when data drift exceeds a predetermined threshold.
B. Configure a Git repository trigger in Cloud Build to initiate retraining when there are new code commits to the model's repository and a Pub/Sub trigger when there is new data in Cloud Storage.
C. Use Cloud Scheduler to initiate a daily retraining job in Vertex AI Pipelines.
D. Configure Cloud Composer to orchestrate a weekly retraining job that includes data extraction from BigQuery, model retraining with Vertex AI Training, and model deployment to a Vertex AI endpoint.
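To illustrate the data-triggered retraining piece in options A and B, below is a sketch of a Pub/Sub-triggered function (first-generation Cloud Functions signature) that launches a precompiled Vertex AI pipeline when new data lands in Cloud Storage; the project, bucket, and pipeline template paths are assumptions.

from google.cloud import aiplatform

def retrain_on_new_data(event, context):
    """Triggered by a Pub/Sub notification for new objects in the training-data bucket."""
    aiplatform.init(project="my-project", location="us-central1")

    job = aiplatform.PipelineJob(
        display_name="image-classifier-retraining",
        template_path="gs://my-bucket/pipelines/retrain_pipeline.json",  # compiled KFP pipeline
        parameter_values={"data_path": "gs://my-bucket/new-training-data/"},
    )
    job.submit()  # fire-and-forget; the pipeline retrains and redeploys the model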