

Google Professional-Machine-Learning Exam

Questions 321–330 of 339

Question 321
You are developing a model to detect fraudulent credit card transactions. You need to prioritize detection, because missing even one fraudulent transaction could severely impact the credit card holder. You used AutoML to train a model on users' profile information and credit card transaction data. After training the initial model, you notice that the model is failing to detect many fraudulent transactions. How should you increase the number of fraudulent transactions that are detected?
A. Add more non-fraudulent examples to the training set.
B. Reduce the maximum number of node hours for training.
C. Increase the probability threshold to classify a fraudulent transaction.
D. Decrease the probability threshold to classify a fraudulent transaction.
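The trade-off behind options C and D can be illustrated with a small sketch: lowering the probability threshold flags more transactions as fraudulent, which raises recall (fewer missed frauds) at the cost of more false positives. The scores and labels below are made-up values, not output from any real model.

```python
# Illustrative only: how the decision threshold affects recall.

def recall_at_threshold(scores, labels, threshold):
    """Fraction of true frauds (label 1) flagged at the given threshold."""
    flagged = [s >= threshold for s in scores]
    true_pos = sum(1 for f, y in zip(flagged, labels) if f and y == 1)
    return true_pos / sum(labels)

scores = [0.95, 0.62, 0.40, 0.30, 0.10, 0.05]  # model's fraud probabilities
labels = [1,    1,    1,    0,    1,    0]     # 1 = fraudulent

print(recall_at_threshold(scores, labels, 0.5))  # high threshold: 0.5 recall
print(recall_at_threshold(scores, labels, 0.2))  # lower threshold: 0.75 recall
```

This is why D is the right direction when missed frauds are the costly error: the threshold trades precision for recall.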

Question 322
You work at an organization that maintains a cloud-based communication platform that integrates conventional chat, voice, and video conferencing into one platform. The audio recordings are stored in Cloud Storage. All recordings have a 16 kHz sample rate and are more than one minute long. You need to implement a new feature in the platform that will automatically transcribe voice call recordings into text for future applications, such as call summarization and sentiment analysis. How should you implement the voice call transcription feature while following Google-recommended practices?
A. Use the original audio sampling rate, and transcribe the audio by using the Speech-to-Text API with synchronous recognition.
B. Use the original audio sampling rate, and transcribe the audio by using the Speech-to-Text API with asynchronous recognition.
C. Downsample the audio recordings to 8 kHz, and transcribe the audio by using the Speech-to-Text API with synchronous recognition.
D. Downsample the audio recordings to 8 kHz, and transcribe the audio by using the Speech-to-Text API with asynchronous recognition.
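The deciding constraint here is that the Speech-to-Text API's synchronous recognition only accepts audio up to about 60 seconds; longer recordings require asynchronous (long-running) recognition, and Google recommends keeping the native sample rate rather than downsampling. A minimal helper sketching that decision rule (the 60-second cap is the documented constraint; the function itself is hypothetical):

```python
SYNC_LIMIT_SECONDS = 60  # documented cap for synchronous Speech-to-Text requests

def recognition_mode(duration_seconds: float) -> str:
    """Pick sync vs. async recognition based on recording length."""
    return "synchronous" if duration_seconds <= SYNC_LIMIT_SECONDS else "asynchronous"

print(recognition_mode(45))  # short clip -> synchronous
print(recognition_mode(90))  # call recordings over 1 minute -> asynchronous
```

Since every recording in the scenario is longer than one minute, asynchronous recognition at the original 16 kHz rate is the fit.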

Question 323
You have created multiple versions of an ML model and have imported them to Vertex AI Model Registry. You want to perform A/B testing to identify the best performing model using the simplest approach. What should you do?
A. Split incoming traffic to distribute prediction requests among the versions. Monitor the performance of each version using Vertex AI's built-in monitoring tools.
B. Split incoming traffic among Google Kubernetes Engine (GKE) clusters, and use Traffic Director to distribute prediction requests to different versions. Monitor the performance of each version using Cloud Monitoring.
C. Split incoming traffic to distribute prediction requests among the versions. Monitor the performance of each version using Looker Studio dashboards that compare logged data for each version.
D. Split incoming traffic among separate Cloud Run instances of deployed models. Monitor the performance of each version using Cloud Monitoring.
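Option A relies on Vertex AI's built-in traffic splitting: an endpoint routes a configurable percentage of prediction requests to each deployed model version. The routing behavior can be simulated in plain Python (the 80/20 split and version names are invented for illustration):

```python
import random

# Hypothetical 80/20 split between two model versions on one endpoint.
traffic_split = {"model_v1": 80, "model_v2": 20}

def route_request(rng: random.Random) -> str:
    """Pick a model version with probability proportional to its split weight."""
    versions = list(traffic_split)
    weights = [traffic_split[v] for v in versions]
    return rng.choices(versions, weights=weights, k=1)[0]

rng = random.Random(0)  # seeded so the simulation is reproducible
counts = {"model_v1": 0, "model_v2": 0}
for _ in range(10_000):
    counts[route_request(rng)] += 1
print(counts)  # roughly 8,000 vs. 2,000 requests
```

Because the endpoint handles the split and monitoring natively, no GKE, Traffic Director, or separate Cloud Run plumbing is needed.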

Question 324
You need to train an XGBoost model on a small dataset. Your training code requires custom dependencies. You need to set up a Vertex AI custom training job. You want to minimize the startup time of the training job while following Google-recommended practices. What should you do?
A. Create a custom container that includes the data and the custom dependencies. In your training application, load the data into a pandas DataFrame and train the model.
B. Store the data in a Cloud Storage bucket, and use the XGBoost prebuilt custom container to run your training application. Create a Python source distribution that installs the custom dependencies at runtime. In your training application, read the data from Cloud Storage and train the model.
C. Use the XGBoost prebuilt custom container. Create a Python source distribution that includes the data and installs the custom dependencies at runtime. In your training application, load the data into a pandas DataFrame and train the model.
D. Store the data in a Cloud Storage bucket, and create a custom container with your training application and its custom dependencies. In your training application, read the data from Cloud Storage and train the model.

Question 325
You are building an ML model to predict customer churn for a subscription service. You have trained your model on Vertex AI using historical data, and deployed it to a Vertex AI endpoint for real-time predictions. After a few weeks, you notice that the model's performance, measured by AUC (area under the ROC curve), has dropped significantly in production compared to its performance during training. How should you troubleshoot this problem?
A. Monitor the training/serving skew of feature values for requests sent to the endpoint.
B. Monitor the resource utilization of the endpoint, such as CPU and memory usage, to identify potential bottlenecks in performance.
C. Enable Vertex Explainable AI feature attribution to analyze model predictions and understand the impact of each feature on the model's predictions.
D. Monitor the latency of the endpoint to determine whether predictions are being served within the expected time frame.
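Option A points at training/serving skew: the distribution of feature values in live requests drifting away from the training distribution, which silently degrades AUC even though the endpoint itself is healthy. A crude way to spot it is to compare per-feature statistics; Vertex AI Model Monitoring does this with proper statistical distances, while this toy version, its features, and its threshold are purely illustrative:

```python
def mean(xs):
    return sum(xs) / len(xs)

def skew_report(train_features, serving_features, tolerance=0.25):
    """Flag features whose serving mean drifts from the training mean
    by more than `tolerance` (absolute difference keeps the sketch short;
    a real check would use a normalized statistical distance)."""
    flagged = []
    for name in train_features:
        drift = abs(mean(serving_features[name]) - mean(train_features[name]))
        if drift > tolerance:
            flagged.append(name)
    return flagged

train = {"tenure_months": [1, 5, 9, 12], "monthly_spend": [20, 30, 25, 25]}
serving = {"tenure_months": [2, 4, 10, 11], "monthly_spend": [55, 60, 70, 65]}
print(skew_report(train, serving))  # ['monthly_spend'] has drifted
```

A drifted feature like this is exactly what skew monitoring surfaces, and what the other options (resource, latency, attribution) would miss.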


Question 326
You work at an organization that manages a popular payment app. You built a fraudulent transaction detection model by using scikit-learn and deployed it to a Vertex AI endpoint. The endpoint is currently using 1 e2-standard-2 machine with 2 vCPUs and 8 GB of memory. You discover that traffic on the gateway fluctuates to four times more than the endpoint's capacity. You need to address this issue by using the most cost-effective approach. What should you do?
A. Re-deploy the model with a TPU accelerator.
B. Change the machine type to e2-highcpu-32 with 32 vCPUs and 32 GB of memory.
C. Set up a monitoring job and an alert for CPU usage. If you receive an alert, scale the vCPUs as needed.
D. Increase the number of maximum replicas to 6 nodes, each with 1 e2-standard-2 machine.
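The arithmetic behind option D: traffic peaks at four times one replica's capacity, so autoscaling across several small replicas absorbs the spike and scales back down off-peak, which is cheaper than permanently running one larger machine. A sketch of the sizing calculation (the QPS figures and headroom factor are made up):

```python
import math

def required_replicas(peak_qps: float, qps_per_replica: float, headroom: float = 1.2) -> int:
    """Minimum replica count to absorb peak traffic with some safety headroom."""
    return math.ceil(peak_qps / qps_per_replica * headroom)

base_capacity_qps = 50            # hypothetical capacity of one e2-standard-2 replica
peak_qps = 4 * base_capacity_qps  # traffic spikes to 4x a single replica's capacity

print(required_replicas(peak_qps, base_capacity_qps))  # 5 replicas with 20% headroom
```

Setting the endpoint's maximum replica count to 6, as in the answer, leaves a little extra margin above the computed minimum.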

Question 327
You are developing an AI text generator that will be able to dynamically adapt its generated responses to mirror the writing style of the user and mimic famous authors if their style is detected. You have a large dataset of various authors' works, and you plan to host the model on a custom VM. You want to use the most effective model. What should you do?
A. Deploy Llama 3 from Model Garden, and use prompt engineering techniques.
B. Fine-tune a BERT-based model from TensorFlow Hub.
C. Fine-tune Llama 3 from Model Garden on Vertex AI Pipelines.
D. Use the Gemini 1.5 Flash foundational model to build the text generator.

Question 328
You are a lead ML architect at a small company that is migrating from on-premises to Google Cloud. Your company has limited resources and expertise in cloud infrastructure. You want to serve your models from Google Cloud as soon as possible. You want to use a scalable, reliable, and cost-effective solution that requires no additional resources. What should you do?
A. Configure Compute Engine VMs to host your models.
B. Create a Cloud Run function to deploy your models as serverless functions.
C. Create a managed cluster on Google Kubernetes Engine (GKE), and deploy your models as containers.
D. Deploy your models on Vertex AI endpoints.

Question 329
You deployed a conversational application that uses a large language model (LLM). The application has 1,000 users. You collect user feedback about the verbosity and accuracy of the model's responses. The user feedback indicates that the responses are factually correct but users want different levels of verbosity depending on the type of question. You want the model to return responses that are more consistent with users' expectations, and you want to use a scalable solution. What should you do?
A. Implement a keyword-based routing layer. If the user's input contains the words "detailed" or "description," return a verbose response. If the user's input contains the word "fact," re-prompt the language model to summarize the response and return a concise response.
B. Ask users to provide examples of responses with the appropriate verbosity as a list of question-and-answer pairs. Use this dataset to perform supervised fine-tuning of the foundational model. Re-evaluate the verbosity of responses with the tuned model.
C. Ask users to indicate all scenarios where they expect concise responses versus verbose responses. Modify the application's prompt to include these scenarios and their respective verbosity levels. Re-evaluate the verbosity of responses with updated prompts.
D. Experiment with other proprietary and open-source LLMs. Perform A/B testing by setting each model as your application's default model. Choose a model based on the results.
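Option C amounts to prompt engineering: encoding users' verbosity expectations directly in the prompt, which scales to all 1,000 users without retraining or swapping models. A sketch of how such a prompt might be assembled (the scenario list and template are invented for illustration):

```python
# Hypothetical verbosity rules gathered from user feedback.
verbosity_rules = {
    "definition or factual lookup": "concise (1-2 sentences)",
    "how-to or troubleshooting question": "verbose (step-by-step detail)",
    "comparison of options": "verbose (pros and cons)",
}

def build_system_prompt(rules: dict) -> str:
    """Fold the verbosity scenarios into one instruction block for the LLM."""
    lines = ["Answer factually. Match response length to the question type:"]
    for scenario, level in rules.items():
        lines.append(f"- If the question is a {scenario}, be {level}.")
    return "\n".join(lines)

print(build_system_prompt(verbosity_rules))
```

Updating this single prompt is far cheaper to iterate on than the keyword routing, fine-tuning, or model-swapping alternatives.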

Question 330
You are using Vertex AI to manage your ML models and datasets. You recently updated one of your models. You want to track and compare the new version with the previous one and incorporate dataset versioning. What should you do?
A. Use Vertex AI TensorBoard to visualize the training metrics of the new model version, and use Data Catalog to manage dataset versioning.
B. Use Vertex AI Model Monitoring to monitor the performance of the new model version, and use Vertex AI Training to manage dataset versioning.
C. Use Vertex AI Experiments to track and compare model artifacts and versions, and use Vertex ML Metadata to manage dataset versioning.
D. Use Vertex AI Experiments to track and compare model artifacts and versions, and use Vertex AI managed datasets to manage dataset versioning.


