Topic 6: Misc. Questions

Note: This question is part of a series of questions that present the same scenario.
Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution.
After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen.
You have an Azure Data Lake Storage account that contains a staging zone.
You need to design a daily process to ingest incremental data from the staging zone, transform the data by executing an R script, and then insert the transformed data into a data warehouse in Azure Synapse Analytics.
Solution: You use an Azure Data Factory schedule trigger to execute a pipeline that executes a mapping data flow, and then inserts the data into the data warehouse.
Does this meet the goal?

A. Yes

B. No

B.   No

Explanation:

Mapping data flows in Azure Data Factory are visually designed transformations that run on Spark; they cannot execute an R script. Running the R script would require, for example, an Azure Databricks notebook activity or a custom activity in the pipeline.


You have an Azure Synapse Analytics workspace named WS1 that contains an Apache Spark pool named Pool1.
You plan to create a database named DB1 in Pool1.
You need to ensure that when tables are created in DB1, the tables are available automatically as external tables to the built-in serverless SQL pool.
Which format should you use for the tables in DB1?

A. JSON

B. CSV

C. Parquet

D. ORC

C.   Parquet

Explanation:

Serverless SQL pool can automatically synchronize metadata from Apache Spark. A serverless SQL pool database will be created for each database existing in serverless Apache Spark pools.
For each Spark external table based on Parquet and located in Azure Storage, an external table is created in a serverless SQL pool database. As such, you can shut down your Spark pools and still query Spark external tables from serverless SQL pool.
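
As an illustration, a minimal Spark SQL sketch of creating such a table (the database name comes from the scenario; the table and column names are illustrative):

    -- Run in the Apache Spark pool (Pool1).
    CREATE DATABASE IF NOT EXISTS DB1;
    CREATE TABLE DB1.Sales (Id INT, Amount DECIMAL(10, 2)) USING PARQUET;

    -- Once metadata synchronization completes, the built-in serverless SQL
    -- pool exposes the table in the dbo schema of a database also named DB1,
    -- queryable with standard T-SQL, for example:
    -- SELECT TOP 10 * FROM DB1.dbo.Sales;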

You have an Azure SQL managed instance named SQL1 and two Azure web apps named App1 and App2.
You need to limit the number of IOPS that App2 queries generate on SQL1.
Which two actions should you perform on SQL1? Each correct answer presents part of the solution.
NOTE: Each correct selection is worth one point.

A. Enable query optimizer fixes.

B. Enable Resource Governor.

C. Enable parameter sniffing.

D. Create a workload group.

E. Configure In-memory OLTP.

F. Run the Database Engine Tuning Advisor.

G. Reduce the Max Degree of Parallelism value.

B.   Enable Resource Governor.
D.   Create a workload group.
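
Explanation:

Resource Governor (supported on Azure SQL Managed Instance) limits the resources, including I/O, that a group of sessions can consume: you create a resource pool with an IOPS cap, create a workload group that uses the pool, and classify App2's sessions into that group. Parameter sniffing concerns query plan compilation and has no effect on IOPS. A minimal T-SQL sketch, assuming App2 can be identified by its application name (all object names and the IOPS value are illustrative):

    -- Cap the I/O available to App2's sessions.
    CREATE RESOURCE POOL App2Pool WITH (MAX_IOPS_PER_VOLUME = 500);
    CREATE WORKLOAD GROUP App2Group USING App2Pool;
    GO

    -- Classifier function (created in master) that routes App2's sessions
    -- into the capped workload group.
    CREATE FUNCTION dbo.fnClassifyApp() RETURNS sysname
    WITH SCHEMABINDING
    AS
    BEGIN
        DECLARE @grp sysname = N'default';
        IF APP_NAME() = N'App2' SET @grp = N'App2Group';
        RETURN @grp;
    END;
    GO

    ALTER RESOURCE GOVERNOR WITH (CLASSIFIER_FUNCTION = dbo.fnClassifyApp);
    ALTER RESOURCE GOVERNOR RECONFIGURE;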

Your company uses Azure Stream Analytics to monitor devices.
The company plans to double the number of devices that are monitored.
You need to monitor a Stream Analytics job to ensure that there are enough processing resources to handle the additional load.
Which metric should you monitor?

A. Input Deserialization Errors

B. Late Input Events

C. Early Input Events

D. Watermark delay

D.   Watermark delay

Explanation:

The Watermark delay metric is computed as the wall clock time of the processing node minus the largest watermark it has seen so far.
The watermark delay metric can rise due to:
1. Not enough processing resources in Stream Analytics to handle the volume of input events.
2. Not enough throughput within the input event brokers, so they are throttled.

You have an Azure Databricks workspace named workspace1 in the Standard pricing tier.
Workspace1 contains an all-purpose cluster named cluster1.
You need to reduce the time it takes for cluster1 to start and scale up. The solution must minimize costs. What should you do first?

A. Upgrade workspace1 to the Premium pricing tier.

B. Configure a global init script for workspace1.

C. Create a pool in workspace1.

D. Create a cluster policy in workspace1.

C.   Create a pool in workspace1.

Explanation:

A pool keeps a set of idle, ready-to-use instances. When cluster1 is attached to the pool, it starts and scales up by taking instances from the pool instead of provisioning new VMs, which cuts startup and autoscale times. Azure Databricks does not charge DBUs while instances sit idle in the pool, which keeps costs to a minimum.

You are designing an anomaly detection solution for streaming data from an Azure IoT hub. The solution must meet the following requirements:
Send the output to Azure Synapse Analytics.
Identify spikes and dips in time series data.
Minimize development and configuration effort.
What should you include in the solution?

A. Azure SQL Database

B. Azure Databricks

C. Azure Stream Analytics

C.   Azure Stream Analytics
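
Explanation:

Azure Stream Analytics includes built-in machine learning functions for anomaly detection, including AnomalyDetection_SpikeAndDip, and supports Azure Synapse Analytics as an output, so it meets all three requirements with minimal development effort. A minimal query sketch (the input, output, and column names are illustrative):

    -- Flag spikes and dips at 95% confidence over a sliding window of the
    -- last 120 seconds (holding up to 120 events), then write the scored
    -- results to the Azure Synapse Analytics output.
    SELECT
        sensorId,
        eventTime,
        AnomalyDetection_SpikeAndDip(reading, 95, 120, 'spikesanddips')
            OVER (PARTITION BY sensorId LIMIT DURATION(second, 120)) AS scores
    INTO SynapseOutput
    FROM IoTHubInput TIMESTAMP BY eventTime;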

You deploy a database to an Azure SQL Database managed instance. You need to prevent read queries from blocking queries that are trying to write to the database. Which database option should you set?

A. PARAMETERIZATION to FORCED

B. PARAMETERIZATION to SIMPLE

C. Delayed Durability to Forced

D. READ_COMMITTED_SNAPSHOT to ON

D.   READ_COMMITTED_SNAPSHOT to ON

Explanation:

In SQL Server, you can also minimize locking contention while protecting transactions from dirty reads of uncommitted data modifications using either:
The READ COMMITTED isolation level with the READ_COMMITTED_SNAPSHOT database option set to ON.
The SNAPSHOT isolation level.
If READ_COMMITTED_SNAPSHOT is set to ON (the default on Azure SQL Database), the Database Engine uses row versioning to present each statement with a transactionally consistent snapshot of the data as it existed at the start of the statement. Locks are not used to protect the data from updates by other transactions.
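
A minimal T-SQL sketch of enabling the option (the database name is illustrative; WITH ROLLBACK IMMEDIATE closes open transactions so the change can apply):

    -- Read queries get a row-versioned snapshot instead of taking shared
    -- locks, so they no longer block writers.
    ALTER DATABASE db1 SET READ_COMMITTED_SNAPSHOT ON WITH ROLLBACK IMMEDIATE;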

You are designing an enterprise data warehouse in Azure Synapse Analytics that will contain a table named Customers. Customers will contain credit card information.
You need to recommend a solution to provide salespeople with the ability to view all the entries in Customers.
The solution must prevent all the salespeople from viewing or inferring the credit card information.
What should you include in the recommendation?

A. row-level security

B. data masking

C. Always Encrypted

D. column-level security

D.   column-level security

Explanation:

Dynamic data masking only obfuscates values in query results; users who can query the table can still infer the underlying data, for example by filtering on the masked column, so masking cannot satisfy the requirement to prevent inference. Column-level security removes the salespeople's access to the credit card column entirely while still letting them view every entry in Customers.
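
A minimal T-SQL sketch of column-level security, assuming the salespeople are members of a database role (the role and column names are illustrative):

    -- Grant access to every column except the credit card column, so the
    -- salespeople can view all rows without seeing or inferring card data.
    GRANT SELECT ON dbo.Customers (CustomerId, Name, Email) TO SalesRole;
    DENY SELECT ON dbo.Customers (CreditCardNumber) TO SalesRole;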

You have an Azure subscription that contains an Azure Data Factory version 2 (V2) data factory named df1.
df1 contains a linked service.
You have an Azure Key Vault named vault1 that contains an encryption key named key1.
You need to encrypt df1 by using key1.
What should you do first?

A. Disable purge protection on vault1.

B. Remove the linked service from df1.

C. Create a self-hosted integration runtime.

D. Disable soft delete on vault1.

B.   Remove the linked service from df1.

Explanation:

A customer-managed key can only be configured on an empty data factory. The data factory can't contain any resources such as linked services, pipelines, or data flows. It is recommended to enable a customer-managed key right after factory creation.
Note: Azure Data Factory encrypts data at rest, including entity definitions and any data cached while runs are in progress. By default, data is encrypted with a randomly generated Microsoft-managed key that is uniquely assigned to your data factory.

You have an Azure SQL database named db1 on a server named server1.
The Intelligent Insights diagnostics log identifies that several tables are missing indexes.
You need to ensure that indexes are created for the tables.
What should you do?

A. Run the DBCC SQLPERF command.

B. Run the DBCC DBREINDEX command.

C. Modify the automatic tuning settings for db1.

D. Modify the Query Store settings for db1.

C.   Modify the automatic tuning settings for db1.
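
Explanation:

Automatic tuning in Azure SQL Database includes a CREATE INDEX option that automatically implements the missing-index recommendations and verifies that they improve performance. A minimal T-SQL sketch of enabling it, run in the context of db1:

    -- Automatically create the indexes that the missing-index
    -- recommendations identify.
    ALTER DATABASE CURRENT SET AUTOMATIC_TUNING (CREATE_INDEX = ON);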
