Free Microsoft DP-900 Practice Test Questions MCQs

Stop wondering if you're ready. Our Microsoft DP-900 practice test is designed to identify your exact knowledge gaps. Validate your skills with Microsoft Azure Data Fundamentals questions that mirror the real exam's format and difficulty. Build a personalized study plan based on your performance on these free DP-900 exam questions, focusing your effort where it matters most.

Targeted practice like this helps candidates feel significantly more prepared on Microsoft Azure Data Fundamentals exam day.

2980+ already prepared
Updated On: 3-Mar-2026
98 Questions
Microsoft Azure Data Fundamentals
4.9/5.0

Page 1 out of 10 Pages

Which command-line tool can you use to query Azure SQL databases?

A. sqlcmd

B. bcp

C. azdata

D. Azure CLI

A.   sqlcmd

Explanation:
The question asks which command-line tool can be used to query Azure SQL databases. Querying involves executing Transact-SQL (T-SQL) statements to retrieve or manipulate data within the database. While multiple tools interact with Azure services or data, only one is specifically designed to connect directly to a database engine and run interactive queries.

Correct Option:

A. sqlcmd
sqlcmd is a command-line utility that enables users to connect to and execute T-SQL queries against SQL Server instances, including Azure SQL Database. It allows you to run SELECT statements, execute stored procedures, and perform various database operations directly from the command prompt. This makes it the appropriate tool for querying data stored in Azure SQL Database.
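As a sketch, a typical sqlcmd invocation against an Azure SQL Database looks like the following; the server, database, login, and table names are hypothetical placeholders, not values from this question:

```shell
# Connect to a (hypothetical) Azure SQL Database and run one T-SQL query.
# Replace server, database, user, password, and table with real values.
sqlcmd -S myserver.database.windows.net -d SalesDb -U sqladmin -P '<password>' \
       -Q "SELECT TOP 10 CustomerID, CustomerName FROM dbo.Customers;"
```

The -S switch names the logical server, -d the database, and -Q runs a single query and exits; omitting -Q starts an interactive session.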

Incorrect Option:

B. bcp
The Bulk Copy Program (bcp) is a utility designed for bulk exporting data from or bulk importing data into SQL Server tables. It focuses on high-volume data movement rather than executing ad-hoc queries. You cannot use bcp to run interactive SELECT statements or explore data within tables.

C. azdata
azdata is a command-line tool written in Python for managing and administering SQL Server Big Data Clusters. It is used for tasks like managing HDFS, Spark jobs, and controllers within big data environments, not for querying standard Azure SQL databases.

D. Azure CLI
Azure CLI is Microsoft's cross-platform command-line tool for managing Azure resources. It is used for tasks such as creating resource groups, provisioning SQL servers, and configuring firewalls. It does not include commands to connect to a database and execute T-SQL queries against the data.

Reference:
sqlcmd utility documentation, bcp utility documentation, azdata overview, Azure CLI documentation

Your company recently reported sales from the third quarter.
You have the chart shown in the following exhibit.

Which type of analysis is shown for the fourth quarter?

A. predictive

B. prescriptive

C. descriptive

D. diagnostic

A.   predictive

Explanation:
The question shows a chart with sales data for the first three quarters (1.5M, 2.45M, 3.59M) and a projected value for the fourth quarter (610,000). The question asks what type of analysis the fourth-quarter value represents, and it appears to be a forward-looking projection rather than historical data.

Correct Option:

A. predictive
Predictive analysis uses historical data to forecast future outcomes. The fourth quarter value of 610,000 is likely a prediction based on the trend from the first three quarters. This type of analysis answers "what will happen?" by identifying patterns in historical data and projecting them forward. It uses statistical models and machine learning techniques to estimate future values.

Incorrect Option:

B. prescriptive
Prescriptive analysis recommends specific actions to achieve desired outcomes. It answers "what should we do?" by suggesting optimal decisions. The chart simply shows a projected value without recommending any actions, so this is not prescriptive analysis.

C. descriptive
Descriptive analysis summarizes historical data to answer "what happened?" The first three quarters show actual sales data (descriptive), but the fourth quarter is a projection, not historical fact.

D. diagnostic
Diagnostic analysis examines data to understand why something happened. It answers "why did it happen?" by identifying causes and correlations. The fourth quarter value is forward-looking, not diagnostic.

Reference:
Types of data analysis (descriptive, diagnostic, predictive, prescriptive) in Microsoft Learn

You manage an application that stores data in a shared folder on a Windows server.

You need to move the shared folder to Azure Storage.

Which type of Azure Storage should you use?

A. queue

B. blob

C. file

D. table

C.   file

Explanation:
The question involves moving a shared folder from a Windows server to Azure Storage. A shared folder on a Windows server is accessed over the network using the SMB protocol, so the migration target should preserve both the folder structure and that access method. The key is identifying which Azure Storage service is a direct replacement for a network file share.

Correct Option:

C. file
Azure Files provides fully managed file shares in the cloud that are accessible via the industry-standard SMB protocol. An Azure file share can be mounted on Windows, Linux, and macOS just like an on-premises shared folder, so the application can continue reading and writing files without code changes. This makes Azure Files the natural replacement for a shared folder hosted on a Windows server.

Incorrect Option:

A. queue
Azure Queue Storage is a messaging service for storing messages that can be accessed from anywhere. It is used for decoupling application components and enabling asynchronous communication. It cannot store files or replicate folder structures from a shared folder.

B. blob
Azure Blob Storage is optimized for storing massive amounts of unstructured data and is accessed through REST APIs and SDKs rather than SMB. Although containers and virtual directories can mimic a folder hierarchy, blob storage cannot be mounted as a network drive the way a shared folder is, so an application that expects a file share would need to be rewritten.

D. table
Azure Table Storage is a NoSQL key/value store for structured and semi-structured data. It is designed for storing large amounts of non-relational data such as user information or logs. It does not support file storage or the folder hierarchies needed for a shared folder.

Reference:
Introduction to Azure Blob Storage, Azure Storage documentation

Your company is designing a data store that will contain student data. The data has the
following format.

Which type of data store should you use?

A. graph

B. key/value

C. object

D. columnar

B.   key/value

Explanation:
The exhibit shows student data where each student number (key) has multiple associated attributes like name, email, phone, and MCP ID. Different students have different sets of attributes, and the data is organized with a key (StudentNumber) pointing to various values. This structure is characteristic of NoSQL data stores where flexible schemas allow different attributes per record.

Correct Option:

B. key/value
Key/value stores are designed to store data as collections of key-value pairs, where each key is unique and maps directly to its associated values. In this exhibit, StudentNumber acts as the key, and all student attributes (first name, last name, email, etc.) represent the value associated with that key. Key/value stores excel at handling varying attributes per key without requiring predefined schemas.

Incorrect Option:

A. graph
Graph databases are designed for highly connected data with complex relationships, such as social networks or recommendation engines. The student data shown is simple attribute storage without relationships between students, making graph databases unnecessarily complex and inappropriate.

C. object
Object storage is optimized for storing large unstructured binary objects like images, videos, and documents. It organizes data into containers with metadata but is not designed for storing structured records with multiple attributes per key like student information.

D. columnar
Columnar databases store data by columns rather than rows, optimizing for analytical queries and aggregations over large datasets. The student data shown is transactional record storage, not analytical data requiring column-based optimization.

Reference:
Understanding NoSQL databases, Key-value store documentation, Azure Cosmos DB data models

For each of the following statements, select Yes if the statement is true. Otherwise, select No.
NOTE: Each correct selection is worth one point.




Explanation:
This question tests understanding of the differences between Platform as a Service (PaaS) and Infrastructure as a Service (IaaS) database offerings in Azure. PaaS provides managed database services where Microsoft handles infrastructure, operating system, and software updates, while IaaS gives you virtual machines where you manage everything yourself. Understanding these responsibilities is crucial for Azure data services.

Correct Option:

Statement 1: Platform as a service (PaaS) database offerings in Azure require less setup and configuration effort than infrastructure as a service (IaaS) database offerings.

Answer: Yes
PaaS database offerings like Azure SQL Database are fully managed services. Microsoft handles infrastructure provisioning, operating system installation, patching, and database software setup. With IaaS, you must manually configure the VM, install the database software, and manage all settings yourself. PaaS significantly reduces administrative overhead and setup time compared to IaaS.
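To illustrate how little setup PaaS requires, a single Azure CLI command can provision a database on an existing logical server; the resource group, server, and database names here are hypothetical:

```shell
# Create an Azure SQL Database (PaaS) on an existing logical server.
# Microsoft provisions and patches the OS and database engine automatically.
az sql db create --resource-group rg-data --server sql-demo-server \
                 --name SalesDb --service-objective S0
```

The equivalent IaaS path would involve creating a VM, installing SQL Server, and configuring networking, storage, and patching yourself.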

Statement 2: Platform as a service (PaaS) database offerings in Azure provide administrators with the ability to control and update the operating system version.

Answer: No
In PaaS database offerings, Microsoft manages the underlying operating system entirely. Administrators cannot access or modify the OS, including controlling version updates or applying patches. These responsibilities are abstracted away to provide a fully managed service. Only with IaaS do you have full control over the operating system.

Statement 3: All platform as a service (PaaS) database offerings in Azure can be paused to reduce costs.

Answer: No
Not all PaaS database offerings support pausing to reduce costs. Dedicated SQL pools in Azure Synapse Analytics (formerly Azure SQL Data Warehouse) support pause/resume, and the serverless tier of Azure SQL Database can auto-pause, but provisioned Azure SQL Database single databases cannot be paused. Because the capability is not universal across all PaaS database services, the statement is false.
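As an example of a PaaS offering that does support pausing, a dedicated SQL pool in Azure Synapse Analytics can be paused from the Azure CLI; the pool, workspace, and resource group names below are hypothetical:

```shell
# Pause a Synapse dedicated SQL pool to stop compute billing;
# storage charges continue while the pool is paused.
az synapse sql pool pause --name DedicatedPool01 \
    --workspace-name synapse-demo --resource-group rg-data
```

The matching `az synapse sql pool resume` command brings the pool back online when compute is needed again.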

Reference:
PaaS vs IaaS comparison, Azure SQL Database documentation, Managed instance documentation, Azure database services overview

What are three characteristics of an Online Transaction Processing (OLTP) workload?
Each correct answer presents a complete solution. (Choose three.)
NOTE: Each correct selection is worth one point.

A. denormalized data

B. heavy writes and moderate reads

C. light writes and heavy reads

D. schema on write

E. schema on read

F. normalized data

B.   heavy writes and moderate reads
D.   schema on write
F.   normalized data

Explanation:
Online Transaction Processing (OLTP) systems are designed to handle high volumes of real-time transactions from multiple users. They prioritize data integrity, consistency, and efficient write operations. Understanding the workload characteristics of OLTP versus analytical systems is essential for designing appropriate data solutions in Azure.

Correct Option:

B. heavy writes and moderate reads
OLTP systems process daily business transactions like orders, payments, and inventory updates. These workloads involve frequent INSERT, UPDATE, and DELETE operations, resulting in heavy write activity. While reads occur, they are typically moderate and involve retrieving specific records rather than large-scale data analysis.

D. schema on write
OLTP systems enforce schema on write, meaning data must conform to a predefined structure before being stored. This ensures data validation, integrity, and consistency at the time of insertion. Strict schema enforcement supports ACID properties and maintains the relational constraints essential for transaction processing.

F. normalized data
OLTP databases are normalized to reduce redundancy and maintain data integrity during frequent write operations. Normalization keeps each fact in one place, so an update touches a single row rather than many duplicated copies, which is exactly what a write-heavy transactional workload needs.

Incorrect Option:

A. denormalized data
Denormalization introduces data redundancy to improve read performance by reducing joins. This is characteristic of analytical systems (OLAP), not OLTP. OLTP systems prioritize write efficiency and data integrity through normalization.

C. light writes and heavy reads
This describes read-intensive workloads typical of reporting or data warehouse systems. OLTP systems handle substantial write operations from transaction processing, making heavy reads with light writes incorrect.

E. schema on read
Schema on read belongs to big data and analytical solutions, where data is stored first and structure is applied when it is queried. OLTP requires immediate schema validation during write operations to maintain transaction integrity.
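A minimal T-SQL sketch of schema on write and normalization in an OLTP database (table and column names are illustrative): the NOT NULL and CHECK constraints reject malformed rows at insert time, and order lines reference products instead of duplicating product data:

```sql
-- Schema enforced at write time: every INSERT must satisfy these constraints.
CREATE TABLE dbo.Product (
    ProductID   INT           PRIMARY KEY,
    ProductName NVARCHAR(100) NOT NULL,
    UnitPrice   DECIMAL(10,2) NOT NULL CHECK (UnitPrice >= 0)
);

-- Normalized: each order line points at a product rather than copying its details.
CREATE TABLE dbo.OrderLine (
    OrderID   INT NOT NULL,
    LineNo    INT NOT NULL,
    ProductID INT NOT NULL REFERENCES dbo.Product(ProductID),
    Quantity  INT NOT NULL CHECK (Quantity > 0),
    PRIMARY KEY (OrderID, LineNo)
);
```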

Reference:
OLTP vs OLAP comparison, Transaction processing concepts, Azure data architecture patterns

For each of the following statements, select Yes if the statement is true. Otherwise, select No.
NOTE: Each correct selection is worth one point.




Explanation:
This question tests understanding of Microsoft Power BI dashboard concepts and capabilities. Dashboards are single-page canvases that display visualizations (tiles) and provide a unified view of key metrics. Knowing dashboard relationships with workspaces, data sources, and visualization limitations is essential for Power BI data visualization certification objectives.

Correct Option:

Statement 1: A Microsoft Power BI dashboard is associated with a single workspace.

Answer: Yes
A Power BI dashboard exists within a specific workspace and cannot span multiple workspaces. When you create a dashboard, it belongs to the workspace where it was created, whether that's "My Workspace" (personal) or a shared workspace. This association helps organize content and manage permissions effectively within the Power BI service.

Statement 2: A Microsoft Power BI dashboard can only display visualizations from a single dataset.

Answer: No
Power BI dashboards can display visualizations (tiles) from multiple datasets and even different reports. You can pin visualizations from various reports, each potentially based on different datasets, to a single dashboard. This flexibility allows you to create comprehensive dashboards that combine metrics from multiple data sources.

Statement 3: A Microsoft Power BI dashboard can display visualizations from a Microsoft Excel workbook.

Answer: Yes
Power BI can connect to Microsoft Excel workbooks as data sources. You can import Excel data into Power BI and create visualizations that can be pinned to dashboards. Additionally, you can pin elements directly from Excel workbooks uploaded to Power BI, making Excel a valid source for dashboard visualizations.

Reference:
Power BI dashboards introduction, Workspaces in Power BI, Power BI data sources documentation

You need to use Transact-SQL to query files in Azure Data Lake Storage from an Azure
Synapse Analytics data warehouse.
What should you use to query the files?

A. Azure Functions

B. Microsoft SQL Server Integration Services (SSIS)

C. PolyBase

D. Azure Data Factory

C.   PolyBase

Explanation:
The question asks about querying files stored in Azure Data Lake Storage using Transact-SQL from within an Azure Synapse Analytics data warehouse. This requires technology that bridges the gap between relational query language (T-SQL) and external file storage, enabling seamless querying of unstructured data without moving it.

Correct Option:

C. PolyBase
PolyBase is a technology in Azure Synapse Analytics that enables T-SQL queries to access and combine data from external sources, including Azure Data Lake Storage, Blob Storage, and Hadoop. It allows you to query files (CSV, Parquet, etc.) directly using standard T-SQL statements without moving or copying the data. PolyBase creates external tables that reference the file location, making the data appear as regular relational tables for querying.
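A hedged T-SQL sketch of the PolyBase pattern in a dedicated SQL pool; the storage account, container, file path, and table names are hypothetical placeholders:

```sql
-- External data source pointing at a Data Lake container (names are placeholders).
CREATE EXTERNAL DATA SOURCE LakeSource
WITH (LOCATION = 'abfss://data@mydatalake.dfs.core.windows.net');

-- Describe the delimited files so the engine can parse them.
CREATE EXTERNAL FILE FORMAT CsvFormat
WITH (FORMAT_TYPE = DELIMITEDTEXT,
      FORMAT_OPTIONS (FIELD_TERMINATOR = ',', FIRST_ROW = 2));

-- External table: queries read the files in place; no data is copied in.
CREATE EXTERNAL TABLE dbo.SalesExternal (
    SaleDate DATE,
    Amount   DECIMAL(10,2)
)
WITH (LOCATION = '/sales/', DATA_SOURCE = LakeSource, FILE_FORMAT = CsvFormat);

-- Ordinary T-SQL against the external files.
SELECT SaleDate, SUM(Amount) AS Total
FROM dbo.SalesExternal
GROUP BY SaleDate;
```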

Incorrect Option:

A. Azure Functions
Azure Functions is a serverless compute service for running event-driven code without managing infrastructure. It is used for building microservices and processing data, but it does not provide T-SQL query capabilities or native integration with Synapse Analytics for querying Data Lake files.

B. Microsoft SQL Server Integration Services (SSIS)
SSIS is an ETL (Extract, Transform, Load) tool for data integration and workflow automation. While it can move data from Data Lake to Synapse, it requires building packages and moving data rather than directly querying files in place using T-SQL as requested.

D. Azure Data Factory
Azure Data Factory is a cloud-based ETL and data integration service used to orchestrate data movement and transformation. It can copy data from Data Lake to Synapse or run pipelines, but it does not enable direct T-SQL querying of external files from within the data warehouse.

Reference:
PolyBase in Azure Synapse Analytics, Query data in Azure Data Lake using PolyBase, Azure Synapse Analytics documentation

Match the Azure Cosmos DB APIs to the appropriate data structures.
To answer, drag the appropriate API from the column on the left to its data structure on the
right. Each API may be used once, more than once, or not at all.
NOTE: Each correct match is worth one point.




Explanation:
Azure Cosmos DB is a multi-model database service that supports multiple data models and APIs. Each API in Cosmos DB is designed to work with specific data structures and query patterns. Understanding which API corresponds to which data model is essential for designing appropriate data solutions in Azure Cosmos DB.

Correct Matches:

Gremlin API → Graph data
The Gremlin API in Azure Cosmos DB is designed for graph data models. It uses the Apache TinkerPop graph traversal language Gremlin to query and manage graph data consisting of vertices (nodes) and edges (relationships). This API is ideal for scenarios involving highly connected data like social networks, recommendation engines, or fraud detection systems where relationships between entities are complex and need efficient traversal.

MongoDB API → JSON documents
The MongoDB API in Azure Cosmos DB provides wire protocol compatibility with MongoDB, making it suitable for JSON document data structures. This API allows applications built for MongoDB to connect to Cosmos DB seamlessly. Document databases store semi-structured data as JSON documents, making them ideal for content management, catalogs, and user profiles where schema flexibility is required.

Table API → Key/value data
The Table API in Azure Cosmos DB is designed for key/value data structures. It provides similar functionality to Azure Table Storage but with additional Cosmos DB capabilities. This API stores data as collections of items with key-based access, making it ideal for scenarios requiring fast point reads and writes based on partition and row keys, such as metadata storage, device information, or user session data.

Not Used:

Cassandra API
The Cassandra API in Azure Cosmos DB is designed for wide-column data structures, which is not listed in the answer area options. It provides compatibility with Apache Cassandra, supporting column-family data models for large-scale, high-velocity workloads.

Reference:
Azure Cosmos DB overview, Cosmos DB API compatibility, Gremlin API documentation, MongoDB API documentation, Table API documentation

Match the terms to the appropriate descriptions.
To answer, drag the appropriate term from the column on the left to its description on the
right. Each term may be used once, more than once, or not at all.
NOTE: Each correct match is worth one point.




Explanation:
This question tests fundamental database concepts regarding different database objects and their purposes. Understanding the distinctions between tables, views, and indexes is essential for working with relational databases in Azure data services. Each object serves a specific function in data storage, retrieval, and performance optimization.

Correct Matches:

Table → A database object that holds data
A table is the fundamental database object that physically stores data in rows and columns. It represents an entity (like customers or products) and contains the actual data records. Tables are the primary storage structures in relational databases where data persists permanently until explicitly modified or deleted. Every database must have at least one table to store meaningful data.

View → A database object whose content is defined by a query
A view is a virtual table based on the result set of a SELECT query. It does not store data physically but presents data from one or more tables in a customized format. Views can simplify complex queries, provide security by restricting access to specific columns, and present data in a user-friendly manner without duplicating storage.

Index → A database object that helps improve the speed of data retrieval
An index is a performance optimization object that speeds up data retrieval operations (SELECT queries) by providing quick access paths to table data. Similar to a book index, it creates a data structure (usually B-tree) that allows the database engine to find rows quickly without scanning entire tables. Indexes improve read performance but add overhead to write operations.
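The three objects can be sketched in a few lines of T-SQL (object names are illustrative; the GO separators are needed because CREATE VIEW must be the only statement in its batch):

```sql
-- Table: physically stores the rows.
CREATE TABLE dbo.Student (
    StudentID INT           PRIMARY KEY,
    LastName  NVARCHAR(50)  NOT NULL,
    Email     NVARCHAR(100)
);
GO

-- View: stores no data of its own; its content is defined by this query.
CREATE VIEW dbo.StudentEmails AS
SELECT StudentID, Email
FROM dbo.Student
WHERE Email IS NOT NULL;
GO

-- Index: speeds up lookups on LastName at some cost to writes.
CREATE NONCLUSTERED INDEX IX_Student_LastName ON dbo.Student (LastName);
```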

Reference:
SQL Server database objects, Tables in relational databases, Views in SQL Server, Indexes in SQL Server documentation


Microsoft Azure Data Fundamentals Practice Exam Questions