

Google Associate-Data-Practitioner Exam


Question 91
You are designing an application that will interact with several BigQuery datasets. You need to grant the application's service account permissions that allow it to query and update tables within the datasets, and list all datasets in a project within your application. You want to follow the principle of least privilege. Which pre-defined IAM role(s) should you apply to the service account?
A. roles/bigquery.jobUser and roles/bigquery.dataOwner
B. roles/bigquery.connectionUser and roles/bigquery.dataViewer
C. roles/bigquery.admin
D. roles/bigquery.studioUser and roles/bigquery.filteredDataViewer
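Whichever role pair is chosen, the bindings themselves are applied with `gcloud projects add-iam-policy-binding`. A minimal sketch of building those commands for option A's pair; the project ID and service-account address are placeholders:

```python
# Sketch: build the gcloud commands that would bind BigQuery roles to a
# service account. Project ID and service-account email are placeholders.
PROJECT = "my-project"
SERVICE_ACCOUNT = f"app-sa@{PROJECT}.iam.gserviceaccount.com"
ROLES = ["roles/bigquery.jobUser", "roles/bigquery.dataOwner"]  # option A's pair

def binding_commands(project: str, member: str, roles: list[str]) -> list[str]:
    """Return one gcloud add-iam-policy-binding command per role."""
    return [
        f"gcloud projects add-iam-policy-binding {project} "
        f"--member=serviceAccount:{member} --role={role}"
        for role in roles
    ]

for cmd in binding_commands(PROJECT, SERVICE_ACCOUNT, ROLES):
    print(cmd)
```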

Question 92
Your company is setting up an enterprise business intelligence platform. You need to limit data access between many different teams while following the Google-recommended approach. What should you do first?
A. Create a separate Looker Studio report for each team, and share each report with the individuals on each team.
B. Create one Looker Studio report with multiple pages, and add each team's data as a separate data source to the report.
C. Create a Looker (Google Cloud core) instance, and create a separate dashboard for each team.
D. Create a Looker (Google Cloud core) instance, and configure different Looker groups for each team.

Question 93
Your company is adopting BigQuery as their data warehouse platform. Your team has experienced Python developers. You need to recommend a fully-managed tool to build batch ETL processes that extract data from various source systems, transform the data using a variety of Google services, and load the transformed data into BigQuery. You want this tool to leverage your team's Python skills. What should you do?
A. Use Dataform with assertions.
B. Deploy Cloud Data Fusion and its included plugins.
C. Use Cloud Composer with pre-built operators.
D. Use Dataflow and pre-built templates.
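The batch extract-transform-load pattern the question describes can be sketched in plain Python; the records and cleaning rule below are invented for illustration, and in an orchestrated pipeline each function would become a separate task:

```python
# Illustrative batch ETL skeleton: extract -> transform -> load.
# The data and transform are invented; a real pipeline would read from
# source systems and write to BigQuery.
def extract() -> list[dict]:
    # Stand-in for pulling rows from a source system.
    return [
        {"user": "a", "amount": "10.5"},
        {"user": "b", "amount": "oops"},   # malformed row, to be dropped
        {"user": "c", "amount": "3.0"},
    ]

def transform(rows: list[dict]) -> list[dict]:
    # Clean: drop rows whose amount is not numeric; cast the rest to float.
    out = []
    for row in rows:
        try:
            out.append({"user": row["user"], "amount": float(row["amount"])})
        except ValueError:
            continue
    return out

def load(rows: list[dict]) -> int:
    # Stand-in for a BigQuery load job; just report how many rows "landed".
    return len(rows)

loaded = load(transform(extract()))
print(loaded)  # 2 clean rows survive the transform
```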

Question 94
You need to create a data pipeline for a new application. Your application will stream data that needs to be enriched and cleaned. Eventually, the data will be used to train machine learning models. You need to determine the appropriate data manipulation methodology and which Google Cloud services to use in this pipeline. What should you choose?
A. ETL; Dataflow -> BigQuery
B. ETL; Cloud Data Fusion -> Cloud Storage
C. ELT; Cloud Storage -> Bigtable
D. ELT; Cloud SQL -> Analytics Hub

Question 95
You are working with a small dataset in Cloud Storage that needs to be transformed and loaded into BigQuery for analysis. The transformation involves simple filtering and aggregation operations. You want to use the most efficient and cost-effective data manipulation approach. What should you do?
A. Use Dataproc to create an Apache Hadoop cluster, perform the ETL process using Apache Spark, and load the results into BigQuery.
B. Use BigQuery's SQL capabilities to load the data from Cloud Storage, transform it, and store the results in a new BigQuery table.
C. Create a Cloud Data Fusion instance and visually design an ETL pipeline that reads data from Cloud Storage, transforms it using built-in transformations, and loads the results into BigQuery.
D. Use Dataflow to perform the ETL process that reads the data from Cloud Storage, transforms it using Apache Beam, and writes the results to BigQuery.
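The filtering-and-aggregation transform this question describes is ordinary SQL. The sketch below uses Python's built-in sqlite3 purely as a local stand-in for BigQuery to show the shape of such a query; the table name, columns, and values are invented:

```python
import sqlite3

# Local stand-in for a BigQuery table; schema and values are invented.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (region TEXT, amount REAL)")
conn.executemany(
    "INSERT INTO sales VALUES (?, ?)",
    [("east", 100.0), ("east", 50.0), ("west", 10.0), ("west", 5.0)],
)

# Simple filter + aggregate, the kind of lightweight transform in question.
rows = conn.execute(
    "SELECT region, SUM(amount) AS total "
    "FROM sales WHERE amount > 20 "
    "GROUP BY region ORDER BY region"
).fetchall()
print(rows)  # [('east', 150.0)]
```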


Question 96
You want to build a model to predict the likelihood of a customer clicking on an online advertisement. You have historical data in BigQuery that includes features such as user demographics, ad placement, and previous click behavior. After training the model, you want to generate predictions on new data. Which model type should you use in BigQuery ML?
A. Linear regression
B. Matrix factorization
C. Logistic regression
D. K-means clustering
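For a binary click/no-click label, the BigQuery ML statements follow the `CREATE MODEL` / `ML.PREDICT` pattern. A sketch carried as Python strings; the dataset, table, and column names are placeholders:

```python
# Sketch of BigQuery ML statements for a binary click-prediction model.
# Dataset, table, and column names are placeholders.
CREATE_MODEL = """
CREATE OR REPLACE MODEL `mydataset.click_model`
OPTIONS (model_type = 'LOGISTIC_REG', input_label_cols = ['clicked']) AS
SELECT age_bucket, ad_placement, prior_clicks, clicked
FROM `mydataset.ad_history`
"""

PREDICT = """
SELECT *
FROM ML.PREDICT(MODEL `mydataset.click_model`,
                (SELECT age_bucket, ad_placement, prior_clicks
                 FROM `mydataset.new_impressions`))
"""

print("LOGISTIC_REG" in CREATE_MODEL)  # True
```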

Question 97
Your data science team needs to collaboratively analyze a 25 TB BigQuery dataset to support the development of a machine learning model. You want to use Colab Enterprise notebooks while ensuring efficient data access and minimizing cost. What should you do?
A. Export the BigQuery dataset to Google Drive. Load the dataset into the Colab Enterprise notebook using Pandas.
B. Use BigQuery magic commands within a Colab Enterprise notebook to query and analyze the data.
C. Create a Dataproc cluster connected to a Colab Enterprise notebook, and use Spark to process the data from BigQuery.
D. Copy the BigQuery dataset to the local storage of the Colab Enterprise runtime, and analyze the data using Pandas.

Question 98
You manage an ecommerce website that has a diverse range of products. You need to forecast future product demand accurately to ensure that your company has sufficient inventory to meet customer needs and avoid stockouts. Your company's historical sales data is stored in a BigQuery table. You need to create a scalable solution that takes into account the seasonality and historical data to predict product demand. What should you do?
A. Use the historical sales data to train and create a BigQuery ML time series model. Use the ML.FORECAST function call to output the predictions into a new BigQuery table.
B. Use Colab Enterprise to create a Jupyter notebook. Use the historical sales data to train a custom prediction model in Python.
C. Use the historical sales data to train and create a BigQuery ML linear regression model. Use the ML.PREDICT function call to output the predictions into a new BigQuery table.
D. Use the historical sales data to train and create a BigQuery ML logistic regression model. Use the ML.PREDICT function call to output the predictions into a new BigQuery table.
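A BigQuery ML time-series model is trained with `model_type = 'ARIMA_PLUS'` (which handles seasonality) and queried with `ML.FORECAST`. A sketch carried as Python strings; the dataset, table, column names, and forecast horizon are placeholders:

```python
# Sketch of BigQuery ML time-series training and forecasting.
# Dataset/table/column names and the 30-day horizon are placeholders.
TRAIN = """
CREATE OR REPLACE MODEL `mydataset.demand_model`
OPTIONS (model_type = 'ARIMA_PLUS',
         time_series_timestamp_col = 'sale_date',
         time_series_data_col = 'units_sold',
         time_series_id_col = 'product_id') AS
SELECT sale_date, units_sold, product_id
FROM `mydataset.historical_sales`
"""

FORECAST = """
SELECT *
FROM ML.FORECAST(MODEL `mydataset.demand_model`,
                 STRUCT(30 AS horizon, 0.9 AS confidence_level))
"""

print("ARIMA_PLUS" in TRAIN)  # True
```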
