

Google Professional-Machine-Learning Exam

Page 12/34: Viewing Questions 111-120 of 339

Question 111
You need to build classification workflows over several structured datasets currently stored in BigQuery. Because you will be performing the classification several times, you want to complete the following steps without writing code: exploratory data analysis, feature selection, model building, training, hyperparameter tuning, and serving. What should you do?
A. Train a TensorFlow model on Vertex AI.
B. Train a classification Vertex AutoML model.
C. Run a logistic regression job on BigQuery ML.
D. Use scikit-learn in Notebooks with pandas library.

Question 112
You are an ML engineer in the contact center of a large enterprise. You need to build a sentiment analysis tool that predicts customer sentiment from recorded phone conversations. You need to identify the best approach to building a model while ensuring that the gender, age, and cultural differences of the customers who called the contact center do not impact any stage of the model development pipeline and results. What should you do?
A. Convert the speech to text and extract sentiments based on the sentences.
B. Convert the speech to text and build a model based on the words.
C. Extract sentiment directly from the voice recordings.
D. Convert the speech to text and extract sentiment using syntactical analysis.

Question 113
You need to analyze user activity data from your company’s mobile applications. Your team will use BigQuery for data analysis, transformation, and experimentation with ML algorithms. You need to ensure real-time ingestion of the user activity data into BigQuery. What should you do?
A. Configure Pub/Sub to stream the data into BigQuery.
B. Run an Apache Spark streaming job on Dataproc to ingest the data into BigQuery.
C. Run a Dataflow streaming job to ingest the data into BigQuery.
D. Configure Pub/Sub and a Dataflow streaming job to ingest the data into BigQuery.
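Option D pairs Pub/Sub (ingestion) with a Dataflow streaming job (transformation) before writing to BigQuery. As a minimal sketch of the per-message transform such a job would run, assuming a hypothetical event schema (`user_id`, `event`, `ts`) and plain-stdlib parsing; in an actual Apache Beam pipeline this function would sit between `ReadFromPubSub` and `WriteToBigQuery`:

```python
import json

def to_bq_row(message: bytes) -> dict:
    """Decode a Pub/Sub message payload into a BigQuery row dict.

    In a Dataflow (Apache Beam) streaming pipeline, this transform
    would run between ReadFromPubSub and WriteToBigQuery.
    """
    event = json.loads(message.decode("utf-8"))
    return {
        "user_id": event["user_id"],  # hypothetical schema fields
        "event": event["event"],
        "ts": event["ts"],
    }

# Example payload as Pub/Sub would deliver it (a bytes body)
msg = b'{"user_id": "u1", "event": "login", "ts": "2024-01-01T00:00:00Z"}'
row = to_bq_row(msg)
```

The key point the question tests is that Pub/Sub alone (option A) handles delivery but not transformation, while Dataflow alone (option C) still needs a streaming source in front of it.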

Question 114
You work for a gaming company that manages a popular online multiplayer game where teams with 6 players play against each other in 5-minute battles. There are many new players every day. You need to build a model that automatically assigns available players to teams in real time. User research indicates that the game is more enjoyable when battles have players with similar skill levels. Which business metrics should you track to measure your model’s performance?
A. Average time players wait before being assigned to a team
B. Precision and recall of assigning players to teams based on their predicted versus actual ability
C. User engagement as measured by the number of battles played daily per user
D. Rate of return as measured by additional revenue generated minus the cost of developing a new model

Question 115
You are building an ML model to predict trends in the stock market based on a wide range of factors. While exploring the data, you notice that some features have a large range. You want to ensure that the features with the largest magnitude don’t overfit the model. What should you do?
A. Standardize the data by transforming it with a logarithmic function.
B. Apply a principal component analysis (PCA) to minimize the effect of any particular feature.
C. Use a binning strategy to replace the magnitude of each feature with the appropriate bin number.
D. Normalize the data by scaling it to have values between 0 and 1.
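Option D describes min-max normalization, which rescales every feature into [0, 1] so that features with large raw magnitudes do not dominate training. A minimal sketch, using hypothetical values:

```python
def min_max_normalize(values):
    """Scale a list of numbers to the [0, 1] range (min-max normalization)."""
    lo, hi = min(values), max(values)
    if hi == lo:
        # A constant feature carries no information; map it to 0.0
        return [0.0 for _ in values]
    return [(v - lo) / (hi - lo) for v in values]

prices = [10.0, 20.0, 30.0, 50.0]          # hypothetical wide-range feature
scaled = min_max_normalize(prices)          # -> [0.0, 0.25, 0.5, 1.0]
```

After scaling, every feature contributes on a comparable scale regardless of its original units or range.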


Question 116
You work for a biotech startup that is experimenting with deep learning ML models based on properties of biological organisms. Your team frequently works on early-stage experiments with new architectures of ML models, and writes custom TensorFlow ops in C++. You train your models on large datasets and large batch sizes. Your typical batch size has 1024 examples, and each example is about 1 MB in size. The average size of a network with all weights and embeddings is 20 GB. What hardware should you choose for your models?
A. A cluster with 2 n1-highcpu-64 machines, each with 8 NVIDIA Tesla V100 GPUs (128 GB GPU memory in total), and a n1-highcpu-64 machine with 64 vCPUs and 58 GB RAM
B. A cluster with 2 a2-megagpu-16g machines, each with 16 NVIDIA Tesla A100 GPUs (640 GB GPU memory in total), 96 vCPUs, and 1.4 TB RAM
C. A cluster with an n1-highcpu-64 machine with a v2-8 TPU and 64 GB RAM
D. A cluster with 4 n1-highcpu-96 machines, each with 96 vCPUs and 86 GB RAM
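A quick back-of-envelope check helps here. Custom C++ TensorFlow ops rule out TPUs (option C), so the question reduces to whether the model fits in accelerator memory. Assuming the per-device capacities implied by the option totals (16 GB per V100, 40 GB per A100) and binary units, a small arithmetic sketch:

```python
MB = 1024 ** 2
GB = 1024 ** 3

batch_bytes = 1024 * 1 * MB       # 1024 examples x ~1 MB each = 1 GiB per batch
model_bytes = 20 * GB             # weights + embeddings

per_gpu_memory = {                # memory per single accelerator (assumed)
    "V100": 16 * GB,              # 128 GB total / 8 GPUs
    "A100": 40 * GB,              # 640 GB total / 16 GPUs
}

model_fits = {name: model_bytes <= mem for name, mem in per_gpu_memory.items()}
# The 20 GB network exceeds a single V100's 16 GB but fits on a 40 GB A100.
```

This is why the A100 cluster (option B) is the only accelerator option that can hold the full 20 GB network on a single device alongside large batches.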

Question 117
You are an ML engineer at an ecommerce company and have been tasked with building a model that predicts how much inventory the logistics team should order each month. Which approach should you take?
A. Use a clustering algorithm to group popular items together. Give the list to the logistics team so they can increase inventory of the popular items.
B. Use a regression model to predict how much additional inventory should be purchased each month. Give the results to the logistics team at the beginning of the month so they can increase inventory by the amount predicted by the model.
C. Use a time series forecasting model to predict each item's monthly sales. Give the results to the logistics team so they can base inventory on the amount predicted by the model.
D. Use a classification model to classify inventory levels as UNDER_STOCKED, OVER_STOCKED, and CORRECTLY_STOCKED. Give the report to the logistics team each month so they can fine-tune inventory levels.
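Option C frames the problem as per-item time series forecasting. As an illustration only (not a production model), a naive moving-average baseline over hypothetical monthly sales shows the shape of such a forecast:

```python
def moving_average_forecast(monthly_sales, window=3):
    """Forecast next month's sales for one item as the mean of the
    last `window` observed months -- a naive time series baseline."""
    recent = monthly_sales[-window:]
    return sum(recent) / len(recent)

sales = [120, 135, 128, 150, 144, 160]      # hypothetical units sold per month
forecast = moving_average_forecast(sales)   # mean of the last 3 months
```

A real solution would use a proper forecasting model (for example, BigQuery ML's ARIMA_PLUS or Vertex AI Forecasting), but the output has the same form: one predicted demand number per item per month for the logistics team to order against.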

Question 118
You are building a TensorFlow model for a financial institution that predicts the impact of consumer spending on inflation globally. Due to the size and nature of the data, your model is long-running across all types of hardware, and you have built frequent checkpointing into the training process. Your organization has asked you to minimize cost. What hardware should you choose?
A. A Vertex AI Workbench user-managed notebooks instance running on an n1-standard-16 with 4 NVIDIA P100 GPUs
B. A Vertex AI Workbench user-managed notebooks instance running on an n1-standard-16 with an NVIDIA P100 GPU
C. A Vertex AI Workbench user-managed notebooks instance running on an n1-standard-16 with a non-preemptible v3-8 TPU
D. A Vertex AI Workbench user-managed notebooks instance running on an n1-standard-16 with a preemptible v3-8 TPU
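The reason frequent checkpointing matters in option D is that preemptible TPUs are much cheaper but can be reclaimed at any time, so training must be able to resume from the last saved state. A framework-agnostic sketch of that checkpoint/resume pattern (in TensorFlow the equivalent is `tf.train.Checkpoint` with `tf.train.CheckpointManager`; the file name and state fields here are hypothetical):

```python
import os
import pickle

CKPT = "train_state.pkl"  # hypothetical checkpoint path

def save_checkpoint(state, path=CKPT):
    """Persist training state so a preempted job can resume."""
    with open(path, "wb") as f:
        pickle.dump(state, f)

def load_checkpoint(path=CKPT):
    """Resume from the last checkpoint, or start fresh if none exists."""
    if os.path.exists(path):
        with open(path, "rb") as f:
            return pickle.load(f)
    return {"step": 0, "weights": None}

# Training loop that survives preemption: pick up from the saved step.
state = load_checkpoint()
for step in range(state["step"], 10):
    state = {"step": step + 1, "weights": f"weights@{step + 1}"}
    if (step + 1) % 5 == 0:  # checkpoint frequently
        save_checkpoint(state)
```

If the job is preempted mid-run, relaunching it re-enters the loop at the last checkpointed step instead of step 0, which is what makes the cheaper preemptible hardware viable.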

Question 119
You work for a company that provides an anti-spam service that flags and hides spam posts on social media platforms. Your company currently uses a list of 200,000 keywords to identify suspected spam posts. If a post contains more than a few of these keywords, the post is identified as spam. You want to start using machine learning to flag spam posts for human review. What is the main advantage of implementing machine learning for this business case?
A. Posts can be compared to the keyword list much more quickly.
B. New problematic phrases can be identified in spam posts.
C. A much longer keyword list can be used to flag spam posts.
D. Spam posts can be flagged using far fewer keywords.

Question 120
One of your models is trained using data provided by a third-party data broker. The data broker does not reliably notify you of formatting changes in the data. You want to make your model training pipeline more robust to issues like this. What should you do?
A. Use TensorFlow Data Validation to detect and flag schema anomalies.
B. Use TensorFlow Transform to create a preprocessing component that will normalize data to the expected distribution, and replace values that don’t match the schema with 0.
C. Use tf.math to analyze the data, compute summary statistics, and flag statistical anomalies.
D. Use custom TensorFlow functions at the start of your model training to detect and flag known formatting errors.
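Option A works because TensorFlow Data Validation infers a schema from training statistics (`tfdv.infer_schema`) and flags new data that drifts from it (`tfdv.validate_statistics`). As a toy illustration of the kind of schema check TFDV automates at scale, with a hypothetical three-column schema:

```python
# Expected schema: column name -> Python type (hypothetical example)
EXPECTED_SCHEMA = {"user_id": str, "amount": float, "country": str}

def find_schema_anomalies(row: dict) -> list:
    """Return anomaly descriptions for one data row.

    TFDV does this at dataset scale: it diffs the statistics of new
    data against an inferred schema and reports the anomalies.
    """
    anomalies = []
    for col, expected_type in EXPECTED_SCHEMA.items():
        if col not in row:
            anomalies.append(f"missing column: {col}")
        elif not isinstance(row[col], expected_type):
            anomalies.append(
                f"{col}: expected {expected_type.__name__}, "
                f"got {type(row[col]).__name__}"
            )
    return anomalies

good = {"user_id": "u1", "amount": 9.99, "country": "DE"}
bad = {"user_id": "u1", "amount": "9.99"}  # wrong type, missing column
```

Detecting and flagging such anomalies, rather than silently coercing them to 0 as option B suggests, is what makes the pipeline robust to an unreliable upstream data broker.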


