HOTSPOT - You need to design a storage solution for an app that will store large amounts of frequently used data. The solution must meet the following requirements: - Maximize data throughput. - Prevent the modification of data for one year. - Minimize latency for read and write operations. Which Azure Storage account type and storage service should you recommend? To answer, select the appropriate options in the answer area. NOTE: Each correct selection is worth one point. Hot Area:
Box 1: BlockBlobStorage - BlockBlobStorage is a premium storage account type for block blobs and append blobs. It is recommended for scenarios with high transaction rates, or scenarios that use smaller objects or require consistently low storage latency. Box 2: Blob - Immutable storage for Azure Blob Storage lets you store data in a WORM (write once, read many) state. A time-based retention policy set to one year prevents the modification and deletion of blobs for that period while the data remains readable, which satisfies the one-year requirement without sacrificing throughput or latency. Reference: https://docs.microsoft.com/en-us/azure/storage/blobs/immutable-storage-overview
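As an illustration of the one-year lock, here is a minimal sketch using the azure-mgmt-storage Python SDK. The subscription ID, resource group, account, and container names are all placeholders, and the dict-style parameters assume a recent SDK version:

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.storage import StorageManagementClient

client = StorageManagementClient(DefaultAzureCredential(), "<subscription-id>")

# Time-based retention policy: blobs in the container cannot be modified
# or deleted for 365 days after creation, but they remain readable.
client.blob_containers.create_or_update_immutability_policy(
    resource_group_name="rg-app",          # hypothetical names throughout
    account_name="mypremiumblobacct",
    container_name="appdata",
    parameters={"immutability_period_since_creation_in_days": 365},
)
```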
Question 72
HOTSPOT - You have an Azure subscription that contains the storage accounts shown in the following table.
You plan to implement two new apps that have the requirements shown in the following table.
Which storage accounts should you recommend using for each app? To answer, select the appropriate options in the answer area. NOTE: Each correct selection is worth one point. Hot Area:
Box 1: storage1 and storage3 only - Standard accounts are needed: data stored in a premium block blob storage account cannot be tiered to Hot, Cool, or Archive by using Set Blob Tier or Azure Blob Storage lifecycle management. Box 2: storage1 and storage4 only - Premium file shares require a premium storage account, and only storage1 and storage4 are premium. Reference: https://docs.microsoft.com/en-us/azure/storage/blobs/access-tiers-overview#feature-support https://docs.microsoft.com/en-us/azure/storage/files/storage-how-to-create-file-share?tabs=azure-portal#basics
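The tiering limitation above concerns the Set Blob Tier operation. A minimal sketch with the azure-storage-blob Python SDK (connection string, container, and blob names are hypothetical); this call succeeds against a standard general-purpose v2 account but the Hot/Cool/Archive tiers are not available on a premium block blob account:

```python
from azure.storage.blob import BlobClient

# Hypothetical connection string and names.
blob = BlobClient.from_connection_string(
    conn_str="<storage1-connection-string>",
    container_name="logs",
    blob_name="app1/data.csv",
)

# Set Blob Tier: supported on standard (GPv2) accounts such as storage1
# and storage3; premium block blob accounts do not support these tiers.
blob.set_standard_blob_tier("Cool")
```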
Question 73
You are designing an application that will be hosted in Azure. The application will host video files that range from 50 MB to 12 GB. The application will use certificate-based authentication and will be available to users on the internet. You need to recommend a storage option for the video files. The solution must provide the fastest read performance and must minimize storage costs. What should you recommend?
A. Azure Files
B. Azure Data Lake Storage Gen2
C. Azure Blob Storage
D. Azure SQL Database
Blob Storage stores large amounts of unstructured data, such as text or binary data, that can be accessed from anywhere in the world via HTTP or HTTPS. You can use Blob Storage to expose data publicly to the world or to store application data privately. The maximum size of a single blob is about 4.77 TB, far above the 12 GB maximum file size here. Reference: https://docs.microsoft.com/en-us/azure/architecture/solution-ideas/articles/digital-media-video
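For read-heavy video delivery, the block blobs are uploaded once and then served over HTTP(S). A minimal upload sketch with the azure-storage-blob Python SDK (all names are placeholders); max_concurrency parallelizes the block upload for large files:

```python
from azure.storage.blob import BlobServiceClient

service = BlobServiceClient.from_connection_string("<connection-string>")
container = service.get_container_client("videos")

# Upload a large video as a block blob; the SDK splits it into blocks
# and uploads up to 8 blocks in parallel.
with open("lecture-01.mp4", "rb") as f:
    container.upload_blob(name="lecture-01.mp4", data=f, max_concurrency=8)
```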
Question 74
You are designing a SQL database solution. The solution will include 20 databases that will be 20 GB each and have varying usage patterns. You need to recommend a database platform to host the databases. The solution must meet the following requirements: - The solution must meet a Service Level Agreement (SLA) of 99.99% uptime. - The compute resources allocated to the databases must scale dynamically. - The solution must have reserved capacity. Compute charges must be minimized. What should you include in the recommendation?
A. an elastic pool that contains 20 Azure SQL databases
B. 20 databases on a Microsoft SQL server that runs on an Azure virtual machine in an availability set
C. 20 databases on a Microsoft SQL server that runs on an Azure virtual machine
D. 20 instances of Azure SQL Database serverless
Compute and storage redundancy is built in for Business Critical databases and elastic pools, with an SLA of 99.99%. An elastic pool shares compute resources across databases with varying usage patterns, which minimizes compute charges, and it supports reserved capacity; serverless compute does not. Reserved capacity also gives you the flexibility to temporarily move your hot databases in and out of elastic pools (within the same region and performance tier) as part of your normal operations without losing the reserved capacity benefit. Reference: https://azure.microsoft.com/en-us/blog/understanding-and-leveraging-azure-sql-database-sla/
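A sketch of creating such an elastic pool with the azure-mgmt-sql Python SDK, assuming a recent SDK version; the server name, region, and sizing values are illustrative assumptions, not part of the question:

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.sql import SqlManagementClient

client = SqlManagementClient(DefaultAzureCredential(), "<subscription-id>")

# One pool of shared compute for the 20 databases; the per-database
# settings let each database scale dynamically within the pool's limits.
client.elastic_pools.begin_create_or_update(
    resource_group_name="rg-sql",          # hypothetical names throughout
    server_name="sqlserver1",
    elastic_pool_name="pool1",
    parameters={
        "location": "eastus",
        "sku": {"name": "StandardPool", "tier": "Standard", "capacity": 200},
        "per_database_settings": {"min_capacity": 0, "max_capacity": 50},
    },
).result()
```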
Question 75
HOTSPOT - You have an on-premises database that you plan to migrate to Azure. You need to design the database architecture to meet the following requirements: - Support scaling up and down. - Support geo-redundant backups. - Support a database of up to 75 TB. - Be optimized for online transaction processing (OLTP). What should you include in the design? To answer, select the appropriate options in the answer area. NOTE: Each correct selection is worth one point. Hot Area:
Box 1: Azure SQL Database - Database size depends on the service tier (for example, Basic, Business Critical, Hyperscale), and the Hyperscale service tier supports databases of up to 100 TB. Active geo-replication is a feature that lets you create a continuously synchronized readable secondary database for a primary database. The readable secondary database may be in the same Azure region as the primary or, more commonly, in a different region; these readable secondary databases are also known as geo-secondaries or geo-replicas. Azure SQL Database and SQL Managed Instance enable you to dynamically add more resources to your database with minimal downtime. Box 2: Hyperscale - Incorrect answers: SQL Server on an Azure VM does not support active geo-replication. Azure Synapse Analytics is not optimized for online transaction processing (OLTP). Azure SQL Managed Instance limits the maximum database size to the currently available instance storage size (reserved), which depends on the number of vCores: 2 TB for 4 vCores, 8 TB for 8 vCores, and 16 TB for other sizes, all well below the required 75 TB. Reference: https://docs.microsoft.com/en-us/azure/azure-sql/database/active-geo-replication-overview https://medium.com/awesome-azure/azure-difference-between-azure-sql-database-and-sql-server-on-vm-comparison-azure-sql-vs-sql-server-vm-cf02578a1188
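A sketch of provisioning a Hyperscale database with the azure-mgmt-sql Python SDK; the server name, region, and vCore count are placeholders:

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.sql import SqlManagementClient

client = SqlManagementClient(DefaultAzureCredential(), "<subscription-id>")

# Hyperscale (Gen5, 4 vCores): supports databases well beyond 75 TB and
# scales compute up and down without a full data copy.
client.databases.begin_create_or_update(
    resource_group_name="rg-sql",          # hypothetical names throughout
    server_name="sqlserver1",
    database_name="oltpdb",
    parameters={
        "location": "eastus",
        "sku": {"name": "HS_Gen5", "tier": "Hyperscale", "capacity": 4},
    },
).result()
```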
Question 76
You are planning an Azure IoT Hub solution that will include 50,000 IoT devices. Each device will stream data, including temperature, device ID, and time data. Approximately 50,000 records will be written every second. The data will be visualized in near real time. You need to recommend a service to store and query the data. Which two services can you recommend? Each correct answer presents a complete solution. NOTE: Each correct selection is worth one point.
A. Azure Table Storage
B. Azure Event Grid
C. Azure Cosmos DB SQL API
D. Azure Time Series Insights
D: Time Series Insights is a fully managed service for time series data. In this architecture, Time Series Insights performs the roles of stream processing, data store, and analytics and reporting. It accepts streaming data from either IoT Hub or Event Hubs and stores, processes, analyzes, and displays the data in near real time. C: The processed data is stored in an analytical data store, such as Azure Data Explorer, HBase, Azure Cosmos DB, Azure Data Lake, or Blob Storage. Reference: https://docs.microsoft.com/en-us/azure/architecture/data-guide/scenarios/time-series
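For the Cosmos DB SQL API option, each device reading can be written as a JSON item. A minimal sketch with the azure-cosmos Python SDK; the endpoint, key, database, container, and the choice of deviceId as the partition key are all assumptions:

```python
from azure.cosmos import CosmosClient

client = CosmosClient("<account-endpoint>", credential="<account-key>")
container = client.get_database_client("iot").get_container_client("telemetry")

# One telemetry record per write; partitioning on deviceId spreads the
# roughly 50,000 writes per second across physical partitions.
container.upsert_item({
    "id": "device-0001-2023-01-01T00:00:00Z",
    "deviceId": "device-0001",
    "temperature": 21.7,
    "eventTime": "2023-01-01T00:00:00Z",
})
```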
Question 77
HOTSPOT - You have an Azure subscription that contains the SQL servers on Azure shown in the following table.
The subscription contains the storage accounts shown in the following table.
You create the Azure SQL databases shown in the following table.
For each of the following statements, select Yes if the statement is true. Otherwise, select No. NOTE: Each correct selection is worth one point. Hot Area:
Box 1: Yes - Auditing to a standard storage account is supported. Box 2: No - Auditing limitation: premium storage is currently not supported. Box 3: No - Auditing limitation: premium storage is currently not supported. Reference: https://docs.microsoft.com/en-us/azure/azure-sql/database/auditing-overview#auditing-limitations
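A sketch of pointing database auditing at a standard storage account with the azure-mgmt-sql Python SDK; all resource names and the key are placeholders. Targeting a premium storage account instead is the unsupported case the No answers refer to:

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.sql import SqlManagementClient

client = SqlManagementClient(DefaultAzureCredential(), "<subscription-id>")

# Send audit logs to a *standard* storage account; premium storage is
# not supported as an auditing target.
client.database_blob_auditing_policies.create_or_update(
    resource_group_name="rg-sql",          # hypothetical names throughout
    server_name="sql1",
    database_name="db1",
    parameters={
        "state": "Enabled",
        "storage_endpoint": "https://standardacct.blob.core.windows.net/",
        "storage_account_access_key": "<access-key>",
    },
)
```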
Question 78
DRAG DROP - You plan to import data from your on-premises environment to Azure. The data is shown in the following table.
What should you recommend using to migrate the data? To answer, drag the appropriate tools to the correct data sources. Each tool may be used once, more than once, or not at all. You may need to drag the split bar between panes or scroll to view content. NOTE: Each correct selection is worth one point. Select and Place:
Box 1: Data Migration Assistant - The Data Migration Assistant (DMA) helps you upgrade to a modern data platform by detecting compatibility issues that can impact database functionality in your new version of SQL Server or Azure SQL Database. DMA recommends performance and reliability improvements for your target environment and allows you to move your schema, data, and uncontained objects from your source server to your target server. Incorrect: AzCopy is a command-line utility that you can use to copy blobs or files to or from a storage account. Box 2: Azure Cosmos DB Data Migration Tool - The Azure Cosmos DB Data Migration Tool can be used to migrate a SQL Server database table to Azure Cosmos DB. Reference: https://docs.microsoft.com/en-us/sql/dma/dma-overview https://docs.microsoft.com/en-us/azure/cosmos-db/cosmosdb-migrationchoices
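The recommended answer is the Azure Cosmos DB Data Migration Tool itself. Purely to illustrate the same data movement (this is a hand-rolled substitute, not the tool), a sketch that reads a SQL Server table with pyodbc and upserts the rows into Cosmos DB; the connection strings, table, and container names are all hypothetical:

```python
import pyodbc
from azure.cosmos import CosmosClient

# Hypothetical source and target.
sql = pyodbc.connect("DRIVER={ODBC Driver 18 for SQL Server};SERVER=onprem;"
                     "DATABASE=sales;UID=user;PWD=pass")
cosmos = CosmosClient("<endpoint>", credential="<key>")
container = cosmos.get_database_client("sales").get_container_client("orders")

cursor = sql.cursor()
for order_id, customer, total in cursor.execute(
        "SELECT OrderId, Customer, Total FROM dbo.Orders"):
    container.upsert_item({
        "id": str(order_id),       # Cosmos DB requires a string id
        "customer": customer,
        "total": float(total),
    })
```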
Question 79
You store web access logs data in Azure Blob Storage. You plan to generate monthly reports from the access logs. You need to recommend an automated process to upload the data to Azure SQL Database every month. What should you include in the recommendation?
A. Microsoft SQL Server Migration Assistant (SSMA)
B. Data Migration Assistant (DMA)
C. AzCopy
D. Azure Data Factory
You can create Data Factory pipelines that copy data from Azure Blob Storage to Azure SQL Database. The configuration pattern applies to copying from a file-based data store to a relational data store. Required steps: create a data factory; create Azure Storage and Azure SQL Database linked services; create Azure Blob and Azure SQL Database datasets; create a pipeline that contains a Copy activity; start a pipeline run; monitor the pipeline and activity runs. Reference: https://docs.microsoft.com/en-us/azure/data-factory/tutorial-copy-data-dot-net
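A sketch of the Copy-activity step with the azure-mgmt-datafactory Python SDK, assuming the factory, linked services, and the two datasets already exist under the hypothetical names used here:

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.datafactory import DataFactoryManagementClient
from azure.mgmt.datafactory.models import (
    BlobSource, CopyActivity, DatasetReference, PipelineResource, SqlSink)

client = DataFactoryManagementClient(DefaultAzureCredential(),
                                     "<subscription-id>")

# Copy activity: Blob Storage (web access logs) -> Azure SQL Database.
copy = CopyActivity(
    name="CopyLogsToSql",
    inputs=[DatasetReference(reference_name="AccessLogsBlobDataset")],
    outputs=[DatasetReference(reference_name="ReportingSqlDataset")],
    source=BlobSource(),
    sink=SqlSink(),
)
client.pipelines.create_or_update(
    "rg-data", "datafactory1", "MonthlyLogLoad",
    PipelineResource(activities=[copy]),
)

# Trigger a run; in practice a monthly schedule trigger would start it.
client.pipelines.create_run("rg-data", "datafactory1", "MonthlyLogLoad")
```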
Question 80
You have an Azure subscription. Your on-premises network contains a file server named Server1. Server1 stores 5 TB of company files that are accessed rarely. You plan to copy the files to Azure Storage. You need to implement a storage solution for the files that meets the following requirements: - The files must be available within 24 hours of being requested. - Storage costs must be minimized. Which two possible storage solutions achieve this goal? Each correct answer presents a complete solution. NOTE: Each correct selection is worth one point.
A. Create an Azure Blob Storage account that is configured for the Cool default access tier. Create a blob container, copy the files to the blob container, and set each file to the Archive access tier.
B. Create a general-purpose v1 storage account. Create a blob container and copy the files to the blob container.
C. Create a general-purpose v2 storage account that is configured for the Cool default access tier. Create a file share in the storage account and copy the files to the file share.
D. Create a general-purpose v2 storage account that is configured for the Hot default access tier. Create a blob container, copy the files to the blob container, and set each file to the Archive access tier.
E. Create a general-purpose v1 storage account. Create a file share in the storage account and copy the files to the file share.
To minimize costs: the Archive tier is optimized for storing data that is rarely accessed and stored for at least 180 days, with flexible latency requirements (on the order of hours). Standard-priority rehydration from the Archive tier typically completes within 15 hours, which satisfies the requirement that files be available within 24 hours of being requested. Reference: https://docs.microsoft.com/en-us/azure/storage/blobs/storage-blob-storage-tiers
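A sketch of the copy-then-archive step from options A and D with the azure-storage-blob Python SDK (connection string, container, and file names are placeholders); the tier can be set in the same call as the upload:

```python
from azure.storage.blob import BlobServiceClient

service = BlobServiceClient.from_connection_string("<connection-string>")
container = service.get_container_client("companyfiles")

# Upload directly into the Archive tier to minimize storage costs; a
# later rehydration (Archive -> Hot/Cool) takes hours, which is within
# the 24-hour availability requirement.
with open("report-2019.docx", "rb") as f:
    container.upload_blob(
        name="report-2019.docx",
        data=f,
        standard_blob_tier="Archive",
    )
```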