Topic 2, Contoso Ltd, Case Study

   

Overview
This is a case study. Case studies are not timed separately. You can use as much exam
time as you would like to complete each case. However, there may be additional case
studies and sections on this exam. You must manage your time to ensure that you are able
to complete all questions included on this exam in the time provided.
To answer the questions included in a case study, you will need to reference information
that is provided in the case study. Case studies might contain exhibits and other resources
that provide more information about the scenario that is described in the case study. Each
question is independent of the other questions in this case study.
At the end of this case study, a review screen will appear. This screen allows you to review
your answers and to make changes before you move to the next section of the exam. After
you begin a new section, you cannot return to this section.
To start the case study
To display the first question in this case study, click the Next button. Use the buttons in the
left pane to explore the content of the case study before you answer the questions. Clicking
these buttons displays information such as business requirements, existing environment
and problem statements. If the case study has an All Information tab, note that the
information displayed is identical to the information displayed on the subsequent tabs.
When you are ready to answer a question, click the Question button to return to the
question.
Existing Environment
Contoso, Ltd. is a manufacturing company that produces outdoor equipment. Contoso has
quarterly board meetings for which financial analysts manually prepare Microsoft Excel
reports, including profit and loss statements for each of the company's four business units,
a company balance sheet, and net income projections for the next quarter.
Data and Sources
Data for the reports comes from three sources. Detailed revenue, cost and expense data
comes from an Azure SQL database. Summary balance sheet data comes from Microsoft
Dynamics 365 Business Central. The balance sheet data is not related to the profit and
loss results, other than that both relate to dates.
Monthly revenue and expense projections for the next quarter come from a Microsoft
SharePoint Online list. Quarterly projections relate to the profit and loss results by using the
following shared dimensions: date, business unit, department, and product category.
Net Income Projection Data
Net income projection data is stored in a SharePoint Online list named Projections in the
format shown in the following table.

Which DAX expression should you use to get the ending balances in the balance sheet reports?

A. CALCULATE ( SUM ( BalanceSheet[BalanceAmount] ), DATESQTD ( 'Date'[Date] ) )

B. CALCULATE ( SUM ( BalanceSheet[BalanceAmount] ), LASTDATE ( 'Date'[Date] ) )

C. FIRSTNONBLANK ( 'Date'[Date], SUM ( BalanceSheet[BalanceAmount] ) )

D. CALCULATE ( MAX ( BalanceSheet[BalanceAmount] ), LASTDATE ( 'Date'[Date] ) )

B.   CALCULATE ( SUM ( BalanceSheet[BalanceAmount] ), LASTDATE ( 'Date'[Date] ) )

Explanation:
A balance sheet report shows a company's financial position at a specific point in time. The "ending balance" for any given period (e.g., a month, quarter, or year) is the balance as of the last day of that period.
Let's analyze why this expression correctly captures that logic and why the others are incorrect for this specific financial requirement.

Why Option B is Correct:
LASTDATE( 'Date'[Date] ):
This time intelligence function returns the last date in the current filter context. For example, if a user is looking at the year 2023, LASTDATE will return December 31, 2023. This is exactly the point in time for which we need the balance sheet amount.

CALCULATE ( SUM( ... ), ... ):
The CALCULATE function is used to modify the filter context. It takes the sum of all BalanceAmount and then forces that calculation to be performed only for the date returned by LASTDATE. This correctly yields the ending balance for the period.
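As an illustration only, a measure built on this pattern might look like the sketch below. The table and column names follow the case study; the measure name and the assumption of a related, marked Date table are mine.

Ending Balance :=
CALCULATE (
    SUM ( BalanceSheet[BalanceAmount] ),   -- aggregate across the accounts in context
    LASTDATE ( 'Date'[Date] )              -- snapshot at the last date in the filter context
)

Because LASTDATE returns a single-date table, the measure behaves as a semi-additive "ending balance" whether the report is sliced by month, quarter, or year.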

Why the Other Options Are Incorrect:
A. CALCULATE ( SUM( BalanceSheet [BalanceAmount] ), DATESQTD( 'Date'[Date] ) )
Incorrect: DATESQTD (Dates Quarter-To-Date) returns a table of all dates from the beginning of the quarter up to the last date in the current context. It does not return a single date. This expression would sum the balance amounts for all days in the quarter-to-date period, which is meaningless for a balance sheet. A balance sheet is a snapshot, not an accumulation over time.

C. FIRSTNONBLANK ( 'Date' [Date], SUM( BalanceSheet[BalanceAmount] ) )
Incorrect: The FIRSTNONBLANK function is designed to return the first value in a column that is not blank. It is often used in semi-additive scenarios but is completely wrong for an ending balance. An ending balance requires the last date, not the first. This would return the balance from the beginning of the period.

D. CALCULATE ( MAX( BalanceSheet[BalanceAmount] ), LASTDATE( 'Date' [Date] ) )
Incorrect:
While it correctly uses LASTDATE to filter to the end of the period, it uses the MAX aggregation function. This would find the single largest balance amount on that last day. However, a balance sheet has many accounts (Assets, Liabilities, Equity), and we need the actual balance for each account, not the maximum value among them. Using SUM is correct because it aggregates the amounts for the specific account and date in context.

Reference:
Core Concept:
This question tests the understanding of semi-additive measures and the correct use of time intelligence functions in DAX. Balance sheet amounts are a classic example of semi-additive data; they cannot be summed over time but represent a snapshot at a point in time.

You need to recommend a strategy to consistently define the business unit, department, and product category data and make the data usable across reports. What should you recommend?

A. Create a shared dataset for each standardized entity.

B. Create dataflows for the standardized data and make the dataflows available for use in all imported datasets.

C. For every report, create and use a single shared dataset that contains the standardized data.

D. For the three entities, create exports of the data from the Power BI model to Excel and store the data in Microsoft OneDrive for others to use as a source.

B.   Create dataflows for the standardized data and make the dataflows available for use in all imported datasets.

📘 Explanation:
To ensure consistent definitions of business unit, department, and product category across multiple Power BI reports, the best strategy is to use Power BI dataflows. Dataflows allow you to:
Centralize and standardize data transformation logic
Reuse cleaned, structured entities across multiple datasets and reports
Maintain consistency and reduce duplication
Enable governance and version control over shared dimensions
By creating dataflows for these entities, you ensure that all reports referencing them use the same definitions, hierarchies, and formatting.
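As a rough sketch of how a report author would then consume one of these standardized entities, Power Query can connect to the dataflow through the Power Platform dataflows connector. The navigation steps below are only illustrative of what the connector generates; the workspace, dataflow, and entity identifiers are placeholders, not values from the case study.

let
    // Illustrative navigation produced by the dataflows connector; IDs are placeholders
    Source       = PowerPlatform.Dataflows(null),
    Workspaces   = Source{[Id = "Workspaces"]}[Data],
    Workspace    = Workspaces{[workspaceId = "<workspace-guid>"]}[Data],
    Dataflow     = Workspace{[dataflowId = "<dataflow-guid>"]}[Data],
    BusinessUnit = Dataflow{[entity = "BusinessUnit", version = ""]}[Data]
in
    BusinessUnit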

Reference:
🔗 Microsoft Learn – Self-service data prep in Power BI with dataflows

❌ Why other options are incorrect:
A. Create a shared dataset for each standardized entity
This leads to fragmentation. Multiple datasets increase maintenance overhead and reduce reusability across models.

C. Use a single shared dataset for every report
This limits flexibility. Not all reports need the same data model, and coupling all reports to one dataset can cause performance and governance issues.

D. Export to Excel and store in OneDrive
This is manual, error-prone, and lacks version control. It’s not scalable or secure for enterprise-grade reporting.

📘 Summary:
Dataflows are the recommended approach for reusable, governed, and standardized data entities in Power BI. They support consistent modeling across reports and align with Microsoft’s best practices for scalable BI architecture.

Which two types of visualizations can be used in the balance sheet reports to meet the reporting goals? Each correct answer presents part of the solution. NOTE: Each correct selection is worth one point.

A. a line chart that shows balances by quarter filtered to account categories that are long-term liabilities.

B. a clustered column chart that shows balances by date (x-axis) and account category (legend) without filters.

C. a clustered column chart that shows balances by quarter filtered to account categories that are long-term liabilities.

D. a pie chart that shows balances by account category without filters.

E. a ribbon chart that shows balances by quarter and accounts in the legend

A.   a line chart that shows balances by quarter filtered to account categories that are long-term liabilities.
C.   a clustered column chart that shows balances by quarter filtered to account categories that are long-term liabilities.

Explanation:
The reporting goal for a balance sheet analysis, especially concerning long-term liabilities, is to show trends over time. Stakeholders need to see how these specific account balances have changed from one period to the next (e.g., quarter-to-quarter) to analyze debt repayment schedules, financial health, and leverage.
Let's analyze why these two visualizations are effective and why the others are not optimal for this specific goal.

Options A and C are Correct:
Both of these visualizations share the two critical components needed to meet the reporting goal:
They are filtered to the specific account categories of interest:
"long-term liabilities." This focuses the analysis on the relevant data.
They show data by quarter:
This provides the necessary time-series analysis to observe trends, increases, decreases, and patterns over multiple periods.

A. Line Chart:
This is the ideal visualization for showing trends over time. The connecting lines make it easy for the eye to follow the progression of each account category's balance across quarters.

C. Clustered Column Chart:
This is also very effective for comparing the values of different long-term liability accounts side-by-side for each quarter and for seeing the changes from one quarter to the next.

Why the Other Options Are Incorrect:
B. a clustered column chart that shows balances by date and account category without filters.
Incorrect: Showing the entire, unfiltered balance sheet (all assets, liabilities, and equity) in a single column chart by date would be far too cluttered and complex. It would be impossible to extract meaningful insights about long-term liabilities specifically. The lack of filtering makes this visualization unsuitable for the focused reporting goal.

D. a pie chart that shows balances by account category without filters.
Incorrect: A pie chart shows the proportion of parts to a whole at a single point in time. It is completely static and cannot show trends or changes over multiple quarters. It also suffers from the same problem as option B: it shows the entire balance sheet without focus, making it difficult to analyze the specific component of long-term liabilities.

E. a ribbon chart that shows balances by quarter and accounts in the legend.
Incorrect: While a ribbon chart can show data over time, its primary strength is illustrating how the ranking of categories changes. It is excellent for seeing which category is the "top" category in each period. For balance sheet amounts, the focus is on the absolute value and trend of specific accounts (like long-term liabilities), not on their changing rank against other accounts like cash or equity. It is a less effective and more specialized choice compared to a standard line or column chart for this scenario.

Reference:
Core Concept:
This question tests the knowledge of selecting appropriate visualizations based on the analytical goal. The key concepts are: Time-series analysis requires visuals that show data across a time axis (e.g., Line Charts, Column Charts). Filtering and Focus is necessary to avoid clutter and present clear, actionable insights for a specific business question.

You need to calculate the last day of the month in the balance sheet data to ensure that you can relate the balance sheet data to the Date table. Which type of calculation and which formula should you use? To answer, select the appropriate options in the answer area. NOTE: Each correct selection is worth one point.

Calculation type: An M custom column
Formula: Date.EndOfMonth(#date([Year], [Month], 1))

Explanation:
The goal is to create a date key that can form a proper relationship with a Date table. This is a data preparation task that should be done as early in the process as possible.

1. Why "An M custom column" is Correct:
Data Preparation vs. Data Modeling:
This task is about preparing the source data. Power Query (which uses the M language) is the ETL tool in Power BI designed for this exact purpose—data transformation and shaping before it is loaded into the data model.
Efficiency and Best Practice:
Creating this column in Power Query (M) is a one-time operation during data refresh. The resulting date is stored in the model, making relationships and filtering very efficient.
Why not a DAX calculated column? While a DAX calculated column could achieve a similar result, it is calculated after the data is loaded into the model and consumes precious VertiPaq (in-memory) engine resources. For a static, structural column like a date key, it is a best practice to create it in Power Query. DAX is better suited for dynamic, context-aware calculations (measures) or columns that depend on other model relationships.

2. Why Date.EndOfMonth(#date([Year], [Month], 1)) is Correct:
This M language formula breaks down into two logical steps:
#date([Year], [Month], 1):
This function constructs a valid Date value from the existing [Year] and [Month] columns. It uses the first day (1) of the month because we need a valid starting point. For example, for Year=2023 and Month=10, it creates the date October 1, 2023.
Date.EndOfMonth(...):
This function then takes that first-day date and returns the last day of its respective month. Continuing the example, it would take October 1, 2023 and return October 31, 2023.
This is the perfect result for relating to a Date table, as it gives a single, specific date for each Year/Month combination in your BalanceSheet data.
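A minimal sketch of this step in the Power Query Advanced Editor, assuming the BalanceSheet table already exposes numeric [Year] and [Month] columns (the step names and the new column name are assumptions):

// Adds a month-end date key that can relate to the Date table
AddedMonthEnd = Table.AddColumn(
    PreviousStep,                                        // placeholder for the preceding query step
    "MonthEndDate",                                      // assumed name for the new column
    each Date.EndOfMonth(#date([Year], [Month], 1)),
    type date
)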

Why the Other Formulas Are Incorrect:
Date.EndOfQuarter(#date([Year], [Month], 1)):
This is incorrect because it returns the last day of the quarter, not the month. For example, for October 2023 (Q4), it would return December 31, 2023, which is not the correct ending date for the month of October.
ENDOFQUARTER(DATE('BalanceSheet'[Year],BalanceSheet[Month],1),0):
This is a DAX function, not an M function. It would be syntactically incorrect in a Power Query custom column. Furthermore, even in DAX, it suffers from the same logical flaw as the previous option—it calculates the end of the quarter, not the end of the month.

Reference:
Core Concept:
This question tests the understanding of the correct tool for the job: using Power Query (M) for data preparation and structural changes, and understanding basic M language date functions.

How should you distribute the reports to the board? To answer, select the appropriate
options in the answer area.
NOTE: Each correct selection is worth one point.


What is the minimum number of datasets and storage modes required to support the reports?

A. two imported datasets

B. a single DirectQuery dataset

C. two DirectQuery datasets

D. a single imported dataset

D.   a single imported dataset

📘 Explanation:
Power BI supports combining data from multiple sources into a single imported dataset, which is the most efficient and flexible approach for report development. Import mode allows:
Integration of data from multiple tables and sources
Fast performance due to in-memory storage
Full support for modeling, DAX, and visuals
You do not need multiple datasets unless there are strict isolation or access control requirements. A single imported dataset can handle all reporting needs if properly modeled.

Reference:
🔗 Microsoft Learn – Understand dataset storage modes
🔗 Microsoft Fabric Community – Minimum number of datasets and storage mode

❌ Why other options are incorrect:
A. Two imported datasets
→ Unnecessary duplication. One well-designed dataset is sufficient.

B. A single DirectQuery dataset
→ Slower performance, limited modeling capabilities, and depends on source system availability.

C. Two DirectQuery datasets
→ Adds complexity and latency. Not needed unless source systems must remain live and isolated.

📘 Summary:
Use a single imported dataset to support multiple reports efficiently. It offers the best performance, flexibility, and simplicity for most reporting scenarios.

Once the profit and loss dataset is created, which four actions should you perform in
sequence to ensure that the business unit analysts see the appropriate profit and loss
data? To answer, move the appropriate actions from the list of actions to the answer area
and arrange them in the correct order.


Explanation:
This process implements Row-Level Security (RLS) to ensure that when a business unit analyst views a report, they only see data for their specific business unit.

Step 1: From Power BI Desktop, create four roles.
Explanation:
This is the foundational step. You must first define the security roles within the data model itself. Since there are four business units, you would create four corresponding roles (e.g., "Role_UnitA", "Role_UnitB", etc.). This is done in Power BI Desktop using the "Manage Roles" dialog box.
Why it must be first: You cannot configure security for roles that do not yet exist. All RLS logic is built upon these role definitions.

Step 2: From Power BI Desktop, add a Table Filter DAX Expression to the roles.
Explanation:
After creating a role, you must define its security rules. This is done by writing a DAX expression that filters the data. For example, for a role meant to see only data for "UnitA", the DAX expression on the relevant table would be [BusinessUnit] = "UnitA" (sketched below). This expression is applied for any user assigned to that role.
Why it must be second:
The filter expression is the core of the security rule and is a property of the role. Therefore, the role must be created (Step 1) before you can assign a filter to it.
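For example, the table filter DAX expressions attached to the roles might look like the following sketch; the table name ('Profit and Loss') and column name are assumptions based on the scenario, and there would be one expression per business unit role:

-- Role_UnitA
'Profit and Loss'[BusinessUnit] = "UnitA"

-- Role_UnitB
'Profit and Loss'[BusinessUnit] = "UnitB"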

Step 3: From Power BI Desktop, publish the dataset to powerbi.com.
Explanation:
The roles and their RLS rules are defined in the .pbix file. To make these security definitions available in the cloud service where users access reports, you must publish the file from Power BI Desktop to a workspace in the Power BI service.
Why it must be third: The security model is part of the dataset. You must deploy the dataset to the service before you can manage user assignments to the roles you just created and configured.

Step 4: From powerbi.com, add role members to the roles.
Explanation:
The final step is to map individual users or security groups to the roles you created. This is done in the workspace settings in the Power BI service. You select the dataset, go to its Security settings, and assign the business unit analysts to their respective roles.
Why it must be last and is correct:
Until this step is completed, the RLS rules are defined but not enforced on any specific user. Assigning a user to a role activates the DAX table filter for that user whenever they interact with reports based on the dataset.

Why the Other Action is Incorrect
From powerbi.com, assign the analysts the Contributor role to the workspace.
Explanation:
This action controls permissions to the workspace (e.g., allowing users to publish reports or modify content). It is unrelated to Row-Level Security (RLS), which controls which rows of data a user can see within a report. A user can be a Contributor, Member, or Viewer and still have RLS applied to them. Assigning a workspace role does not filter the data they see.

Reference:
Core Concept: This question tests the procedural knowledge for implementing Row-Level Security (RLS) in Power BI, which involves both Desktop (model definition) and Service (user management) steps.

You need to create a solution to meet the notification requirements of the warehouse
shipping department.
What should you do? To answer, select the appropriate options in the answer area.
NOTE: Each correct selection is worth one point.



📘 Explanation:
To meet the notification requirements of the warehouse shipping department, the correct approach is to use a dashboard tile with a data alert. Power BI alerts can only be triggered from visuals pinned to dashboards—not from reports directly. This makes dashboards the required surface for alert-based automation.
Start by creating a card visualization that calculates the percentage of late orders. This visual should be pinned to a dashboard, which serves as the container for alert-enabled tiles. Once pinned, you can configure a data alert on the card tile to notify users when the percentage exceeds a defined threshold (e.g., 10%).
Data alerts in Power BI are designed for numeric tiles such as cards, KPIs, and gauges. They support email notifications and can be used to trigger flows in Power Automate for more advanced workflows. This setup ensures that warehouse staff are proactively notified when late orders spike, enabling timely operational response.
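A hedged sketch of such a card measure follows; the Orders table and its ShippedDate and DueDate columns are assumptions, since the exact schema is not shown in this excerpt:

-- Hypothetical measure for the card visual; table and column names are assumed
Late Orders % :=
DIVIDE (
    CALCULATE (
        COUNTROWS ( Orders ),
        FILTER ( Orders, Orders[ShippedDate] > Orders[DueDate] )   -- orders shipped after their due date
    ),
    COUNTROWS ( Orders )
)

Formatted as a percentage, this measure can be shown on a card, pinned to a dashboard, and used as the basis for the data alert.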

Reference:
🔗 Microsoft Learn – Set data alerts in Power BI
🔗 Microsoft Learn – Pin visuals to dashboards

❌ Why other options are incorrect:
Bookmark:
Bookmarks are used for navigation and storytelling within reports. They save the current view, filter state, and visual configuration but cannot trigger alerts. They are not tied to dashboards and have no notification capabilities.
Phone view:
This optimizes the layout of reports for mobile devices but does not support alert logic. It’s purely a formatting feature and has no role in data-driven notifications.
Report:
Alerts cannot be configured directly on report visuals. Power BI’s alert system is tied to dashboard tiles only, which means visuals must be pinned to a dashboard before alerts can be set. Attempting to configure alerts from within a report will not work.

📘 Summary:
To enable automated notifications for late orders, you must:
Create a card visual showing the percentage of late orders
Pin it to a dashboard
Configure a data alert on the dashboard tile

You need to design the data model and the relationships for the Customer Details
worksheet and the Orders table by using Power BI. The solution must meet the report requirements. For each of the following statement, select Yes if the statement is true, Otherwise, select No. NOTE: Each correct selection is worth one point.



Explanation:
Statement 1: A relationship must be created between the CustomerID column in the Customer Details worksheet and the CustomerID column in the Orders table.
Answer: Yes
Explanation:
This is a fundamental principle of the star schema design in Power BI. The Customer Details worksheet is a dimension table (containing descriptive attributes about customers), and the Orders table is a fact table (containing transactional data). To filter the order transactions by customer attributes or to bring customer details into an order-related report, a one-to-many relationship must exist between the dimension table (Customer Details[CustomerID]) and the fact table (Orders[CustomerID]). Without this relationship, the tables cannot interact, and the reporting requirements cannot be met.

Statement 2: The Data Type of the columns in the relationship between the Customer Details worksheet and the Orders table must be set to Text.
Answer: No
Explanation:
While the data types of the two columns used in a relationship must be identical, they do not have to be "Text." The CustomerID could logically be a Whole Number data type (an Integer). In fact, using a numeric data type is often more performant. The critical rule is data type consistency, not a specific data type. This statement is false because it incorrectly mandates the "Text" data type as the only option.

Statement 3: The Region field used to filter the Top Customers report must come from the Orders table.
Answer: No
Explanation:
This statement violates a core best practice of dimensional modeling. The Region is a descriptive attribute of a Customer. It should reside in the Customer Details dimension table, not in the Orders fact table. This design, known as "denormalization," allows for consistent filtering and prevents data redundancy and potential inconsistencies. If Region were stored in the Orders table, a single customer could potentially have multiple regions associated with it if data entry errors occurred, leading to inaccurate reports. Filtering by a column from the Customer Details table is not only possible but is the correct approach, as the relationship created in Statement 1 will propagate the filter from the dimension to the fact table.

Reference:
Core Concept: This question tests the understanding of dimensional modeling, specifically the principles of the star schema, the importance of correct data types for relationships, and the practice of storing descriptive attributes in dimension tables.

You need to create the dataset. Which dataset mode should you use?

A. DirectQuery

B. Import

C. Live connection

D. Composite

B.   Import

Explanation:
The scenario requires creating a dataset from multiple data sources (Excel worksheet, SQL Server table, and a CSV file) that need to be combined and shaped in Power Query before being loaded into the data model. The Import mode is the only one that supports this specific multi-source data mashup and transformation requirement.
Let's analyze why Import mode is the correct choice and why the other modes are not suitable:

Why Option B (Import) is Correct:
Multi-Source Data Mashup:
The Import mode allows you to connect to all three data sources (Excel, SQL Server, CSV) within Power Query Editor, perform necessary transformations (cleaning, merging, appending, calculated columns), and then load the final, integrated dataset into Power BI's in-memory engine (VertiPaq). This is the primary use case for Import mode.

Full Power Query Capabilities:
You have access to the complete set of data transformation tools in Power Query when using Import mode, which is essential for preparing the data from these disparate sources.

Performance:
Once loaded, reports and dashboards are extremely fast because they query the highly optimized, in-memory data model.
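As a rough illustration of the Import-mode mashup described above, each source typically becomes its own Power Query query before being transformed and loaded into one model; the server, database, file paths, sheet name, and step names below are placeholders:

// Sketch only: in practice each source is usually a separate query; all names and paths are placeholders
let
    SqlOrders   = Sql.Database("contoso-sql.database.windows.net", "Sales"){[Schema = "dbo", Item = "Orders"]}[Data],
    ExcelCust   = Excel.Workbook(File.Contents("C:\Data\Customer Details.xlsx"), null, true){[Item = "Customer Details", Kind = "Sheet"]}[Data],
    CsvRaw      = Csv.Document(File.Contents("C:\Data\Projections.csv"), [Delimiter = ",", Encoding = 65001]),
    CsvPromoted = Table.PromoteHeaders(CsvRaw, [PromoteAllScalars = true])
in
    CsvPromoted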

Why the Other Options Are Incorrect:
A. DirectQuery:
This mode does not load data into the model. Instead, it sends queries directly to the source database at report runtime. It is designed for a single, supported relational source (like the SQL Server table) and does not support combining data from multiple different source types (like Excel and CSV) into a single model. Power Query transformations are also severely limited in DirectQuery mode.

C. Live connection:
This mode is used to connect to an existing semantic model (a published Power BI dataset, Azure Analysis Services, or SQL Server Analysis Services). It is not used to create a new dataset from raw source files and a database table. You connect live to an already-built semantic model.

D. Composite:
A composite model blends Import and DirectQuery modes. You might use this if, for example, you imported the Excel and CSV data but kept a very large SQL Server table in DirectQuery. While technically possible, it adds unnecessary complexity. The straightforward and most efficient solution for combining these three relatively small data sources is to use Import mode, which is the default and recommended approach for this scenario.

Reference:
Core Concept:
This question tests the understanding of dataset storage modes in Power BI and selecting the correct mode based on data source types and transformation requirements.
