Topic 2: Contoso, Ltd. Case Study

   

This is a case study. Case studies are not timed separately. You can use as much exam
time as you would like to complete each case. However, there may be additional case
studies and sections on this exam. You must manage your time to ensure that you are able
to complete all questions included on this exam in the time provided.
To answer the questions included in a case study, you will need to reference information
that is provided in the case study. Case studies might contain exhibits and other resources
that provide more information about the scenario that is described in the case study. Each
question is independent of the other questions in this case study.
At the end of this case study, a review screen will appear. This screen allows you to review
your answers and to make changes before you move to the next section of the exam. After
you begin a new section, you cannot return to this section.
To start the case study
To display the first question in this case study, click the Next button. Use the buttons in the
left pane to explore the content of the case study before you answer the questions. Clicking
these buttons displays information such as business requirements, existing environment,
and problem statements. If the case study has an All Information tab, note that the
information displayed is identical to the information displayed on the subsequent tabs.
When you are ready to answer a question, click the Question button to return to the
question.
General Overview
Contoso, Ltd. is an international accounting company that has offices in France, Portugal,
and the United Kingdom. Contoso has a professional services department that contains the
roles shown in the following table.

• RBAC role assignments must use the principle of least privilege.
• RBAC roles must be assigned only to Azure Active Directory groups.
• AI solution responses must have a confidence score that is equal to or greater than 70
percent.
• When the response confidence score of an AI response is lower than 70 percent, the
response must be improved by human input.
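The 70 percent confidence requirement above amounts to a simple gate. The sketch below is illustrative only (the function and field names are hypothetical, not part of any Azure SDK):

```python
# Hypothetical gate implementing the 70 percent confidence requirement:
# responses at or above the threshold pass through; anything lower is
# flagged for human review before being returned to the customer.

CONFIDENCE_THRESHOLD = 0.70

def route_response(answer: str, confidence: float) -> dict:
    """Return the answer directly, or mark it for human improvement."""
    if confidence >= CONFIDENCE_THRESHOLD:
        return {"answer": answer, "needs_human_review": False}
    return {"answer": answer, "needs_human_review": True}
```

Note that the comparison uses `>=`, matching "equal to or greater than 70 percent": a score of exactly 0.70 passes without human input.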
Chatbot Requirements
Contoso identifies the following requirements for the chatbot:
• Provide customers with answers to the FAQs.
• Ensure that the customers can chat to a customer service agent.
• Ensure that the members of a group named Management-Accountants can approve the
FAQs.
• Ensure that the members of a group named Consultant-Accountants can create and
amend the FAQs.
• Ensure that the members of a group named Agent-CustomerServices can browse the
FAQs.
• Ensure that access to the customer service agents is managed by using Omnichannel for
Customer Service.
• When the response confidence score is low, ensure that the chatbot can provide other
response options to the customers.
Document Processing Requirements
Contoso identifies the following requirements for document processing:
• The document processing solution must be able to process standardized financial
documents that have the following characteristics:
• Contain fewer than 20 pages.
• Be formatted as PDF or JPEG files.
• Have a distinct standard for each office.
• The document processing solution must be able to extract tables and text from the
financial documents.
• The document processing solution must be able to extract information from receipt
images.
• Members of a group named Management-Bookkeeper must define how to extract tables
from the financial documents.
• Members of a group named Consultant-Bookkeeper must be able to process the financial
documents.
Knowledgebase Requirements
Contoso identifies the following requirements for the knowledgebase:
• Supports searches for equivalent terms
• Can transcribe jargon with high accuracy
• Can search content in different formats, including video
• Provides relevant links to external resources for further research

https://selfexamtraining.com/uploadimages/image_2026-04-03_144411142.png






Explanation
The function provision_resource creates a Cognitive Services resource. Based on the call shown in the answer area, the parameters are name, kind, location, and tier, in that order. To create a Computer Vision resource in East US with the S1 pricing tier, the call is provision_resource("res1", "ComputerVision", "eastus", "S1").

Correct Option
provision_resource("res1", "ComputerVision", "eastus", "S1")

Kind: "ComputerVision"

Location: "eastus"

Tier: "S1"

Why Other Options Are Incorrect
CustomVision.Prediction / CustomVision.Training – Not Computer Vision; these are for Custom Vision.

FormRecognizer – Different service.

"useast" – Invalid region (should be "eastus").

"S0" – A lower pricing tier, not the S1 tier the scenario requires (the free tier is F0).

Reference
Microsoft Learn: Cognitive Services SKUs – Computer Vision uses "ComputerVision" kind, "S1" tier.
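A minimal stand-in for the exam's provision_resource function makes the argument order concrete. The signature below is an assumption inferred from the answer area, (name, kind, location, tier); the real exam function's implementation is not shown:

```python
# Hypothetical stand-in for the exam's provision_resource function,
# assuming the parameter order implied by the answer area:
# (name, kind, location, tier). It only records the values; a real
# implementation would call the Cognitive Services management API.

def provision_resource(name: str, kind: str, location: str, tier: str) -> dict:
    """Return the resource definition that would be provisioned."""
    return {"name": name, "kind": kind, "location": location, "sku": tier}

# The call from the correct option:
res = provision_resource("res1", "ComputerVision", "eastus", "S1")
```

With this ordering, "eastus" lands in the location parameter and "S1" in the tier parameter, matching the correct option.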

You have 100,000 documents.

You are building an app that will identify city names in each document by using Azure AI Language.

You need to test the detection client.

How should you complete the code? To answer, select the appropriate options in the answer area.

NOTE: Each correct selection is worth one point.




Explanation
The task is to identify city names using Azure AI Language. The correct client for named entity recognition (NER) is TextAnalyticsClient. The response contains entities with a Category property; for city names, you check if the Category equals "City" or "Location" (depending on the model version). The code should filter on Category.

Correct Options

First blank (client declaration): TextAnalyticsClient client
Named entity recognition (NER) for cities is part of the Text Analytics (Azure AI Language) client library. The other clients (DocumentAnalysisClient, FormRecognizerClient, QuestionAnsweringClient) are for different services.

Second blank (after entity.): Category
The RecognizeEntities method returns a collection of CategorizedEntity objects. Each entity has a Category property (e.g., "Person", "Location", "City"). You compare entity.Category to "City" to filter city names.

Why Other Options Are Incorrect

DocumentAnalysisClient / FormRecognizerClient – For Document Intelligence (form extraction), not text entity recognition.

QuestionAnsweringClient – For QnA Maker / custom question answering.

KeyValuePairs – Not a property of CategorizedEntity.

SubCategory – Available for some entities (e.g., "USState" under "Location"), but not the primary category.

TextAppearance – For handwriting detection, not category filtering.

Reference
Microsoft Learn: Text Analytics – Entity Recognition – Use TextAnalyticsClient.RecognizeEntities.
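The category filter can be sketched in Python. The real service call through the azure-ai-textanalytics package is shown in comments (it needs a live endpoint and key), so the filtering logic below runs against stand-in entity objects that mimic the SDK's text/category/subcategory attributes:

```python
# Filtering recognized entities down to city names. With the real SDK the
# entities would come from TextAnalyticsClient:
#
#   from azure.ai.textanalytics import TextAnalyticsClient
#   from azure.core.credentials import AzureKeyCredential
#   client = TextAnalyticsClient(endpoint, AzureKeyCredential(key))
#   result = client.recognize_entities(documents)
#
# Here a minimal stand-in with the same attributes is used so the filter
# can run without a live endpoint.

from dataclasses import dataclass
from typing import Optional

@dataclass
class Entity:
    text: str
    category: str
    subcategory: Optional[str] = None

def city_names(entities):
    """Keep entities whose category (or Location subcategory) marks a city."""
    return [e.text for e in entities
            if e.category == "City"
            or (e.category == "Location" and e.subcategory == "City")]

sample = [
    Entity("Contoso", "Organization"),
    Entity("Paris", "Location", "City"),
    Entity("London", "Location", "City"),
]
```

The filter accepts either a bare "City" category or a "Location" entity with a "City" subcategory, since the category used varies by model version as noted above.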

You have an Azure subscription that contains an Azure AI Language service resource named Resource1. You query Resource1 by running a cURL command and receive the following response.



For each of the following statements, select Yes if the statement is true. Otherwise, select

No. NOTE: Each correct selection is worth one point.




Explanation
The response shows a "category": "Person" entity with "confidenceScore": 1. This is Named Entity Recognition (NER) output, not PII detection (which would return categories such as "PhoneNumber", "Email", or "SSN"). A confidence score of 1 (100 percent) indicates high confidence, not low confidence. The /contentsafety path belongs to Azure AI Content Safety, not to the Language service NER endpoint.

Correct Answers

Statement 1: Resource1 was queried by using Personally Identifiable Information (PII) detection.
No – The category "Person" is from standard NER, not PII detection. PII detection categories include "PhoneNumber", "Email", "SSN", etc. The response lacks PII‑specific categories.

Statement 2: The response indicates that Resource1 has low confidence in the accuracy of the identified entity.
No – "confidenceScore": 1 means 100% confidence, which is high confidence, not low.

Statement 3: The request URL includes the following string: https://resource1.cognitiveservices.azure.com/contentsafety.
No – The /contentsafety endpoint is for Azure AI Content Safety, not for Language service NER. The correct endpoint for NER is /language/.../entities/recognition/general or similar.

Reference
Microsoft Learn: Language service NER – Returns categories like Person, Location, Organization with confidence scores.

You have an Azure OpenAI model named AI1.

You are building a web app named App1 by using the Azure OpenAI SDK. You need to configure App1 to connect to AI1. What information must you provide?

A. the endpoint, key, and model name

B. the deployment name, endpoint, and key

C. the endpoint, key, and model type

D. the deployment name, key, and model name

B.   the deployment name, endpoint, and key

Explanation
To connect to an Azure OpenAI model using the Azure OpenAI SDK, you must provide the endpoint (resource URL), the API key (subscription key for authentication), and the deployment name (the name you gave when deploying the model, e.g., "gpt-35-turbo-deployment"). The model name alone is insufficient because multiple deployments of the same model can exist.

Correct Option

B. the deployment name, endpoint, and key

Endpoint – The resource URL (e.g., https://your-resource.openai.azure.com/).

Key – The API key for authentication (passed in the api-key header).

Deployment name – Identifies which deployed model instance to call (e.g., gpt-35-turbo or gpt-4 deployment).

Why Other Options Are Incorrect

A. the endpoint, key, and model name – The model name (e.g., "gpt-35-turbo") is not enough; you need the deployment name, which can be different from the model name.

C. the endpoint, key, and model type – "Model type" is vague and not a required parameter in the SDK.

D. the deployment name, key, and model name – Missing the endpoint; the SDK requires the endpoint to locate the resource.

Reference
Microsoft Learn: Azure OpenAI SDK – Authentication – Requires endpoint, key, and deployment name.
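The three values in answer B can be sketched as follows. The client construction through the openai Python package is left in comments (it needs a live deployment), and the endpoint, key, and deployment name shown are placeholders, so only the local check runs:

```python
# The three values answer B requires: endpoint, key, and deployment name.
# All three values below are placeholders; with the openai Python package,
# the client would be constructed as shown in the trailing comment.

REQUIRED = {"endpoint", "api_key", "deployment_name"}

config = {
    "endpoint": "https://your-resource.openai.azure.com/",  # placeholder
    "api_key": "<api-key>",                                 # placeholder
    "deployment_name": "gpt-35-turbo-deployment",           # placeholder
}

def missing_settings(cfg: dict) -> set:
    """Return which of the required connection settings are absent."""
    return REQUIRED - cfg.keys()

# Real call (requires the openai package and a live deployment):
#   from openai import AzureOpenAI
#   client = AzureOpenAI(azure_endpoint=config["endpoint"],
#                        api_key=config["api_key"],
#                        api_version="2024-02-01")
#   client.chat.completions.create(model=config["deployment_name"], ...)
```

Note that the `model` argument of the chat completions call takes the deployment name, not the underlying model name, which is why answer B lists the deployment name.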

You create a knowledge store for Azure Cognitive Search by using the following JSON.



Use the drop-down menus to select the answer choice that completes each statement based on the information presented in the graphic. NOTE: Each correct selection is worth one point.




Explanation
The JSON defines a "projections" array containing one projection group (the group that includes tables, objects, and files). The "files": [] array is empty, but the "objects" section specifies a "storageContainer" for saving layout text to blob storage. Images are not saved because "files": [] is empty and no projection explicitly saves images.

Correct Answers

First blank: one projection group
The "projections" array has a single object (the group containing tables, objects, and files). That counts as one projection group.

Second blank: not be saved
The "files" array is empty ("files": []), meaning no file projections are defined. The "objects" projection saves text (/document/normalized_images/*/layoutText) but not the actual images. Therefore, images are not saved.

Why Other Options Are Incorrect

Two / four projection groups – Only one group is present.
be saved to a blob container / file storage / Azure Data Lake – No file projection is configured, and objects save only text, not images.

Reference
Microsoft Learn: Knowledge store projections – A projection group can contain tables, objects, and files. Empty "files": [] means no file projections.
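The two answers can be checked mechanically against an illustrative JSON fragment. The fragment below is reconstructed from the explanation (one group, empty "files", an "objects" projection saving layout text), not copied from the actual exhibit:

```python
import json

# Illustrative knowledge store fragment shaped like the one the question
# describes: one projection group, empty "files", and an "objects"
# projection that saves layout text to a blob container. Reconstructed
# from the explanation, not the actual exhibit.
knowledge_store = json.loads("""
{
  "projections": [
    {
      "tables": [],
      "objects": [
        {
          "storageContainer": "obj",
          "source": "/document/normalized_images/*/layoutText"
        }
      ],
      "files": []
    }
  ]
}
""")

# Each element of "projections" is one projection group.
group_count = len(knowledge_store["projections"])

# Images are only saved when a group's "files" array defines a projection.
image_files_saved = any(g["files"] for g in knowledge_store["projections"])
```

With one element in "projections" and an empty "files" array, the counts confirm "one projection group" and "not be saved".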

You have the following files:

• File1.pdf

• File2.jpg

• File3.docx

• File4.webp

• File5.png

Which files can you analyze by using Azure AI Content Understanding?

A. File1.pdf and File3.docx only

B. File1.pdf, File2.jpg, and File5.png only

C. File1.pdf, File2.jpg, and File3.docx only

D. File1.pdf, File2.jpg, File3.docx, and File5.png only

E. File1.pdf, File2.jpg, File3.docx, File4.webp, and File5.png

D.   File1.pdf, File2.jpg, File3.docx, and File5.png only

Explanation
Azure AI Content Understanding supports common document and image formats. Based on typical supported formats for document analysis services (e.g., Document Intelligence, Content Understanding), PDF, JPG, DOCX, and PNG are supported. WEBP may not be universally supported in all preview or general availability versions, so it is excluded from the correct answer.

Correct Option

D. File1.pdf, File2.jpg, File3.docx, and File5.png only

PDF (File1) – Standard document format, supported.

JPG (File2) – Common image format, supported.

DOCX (File3) – Microsoft Word format, supported.

PNG (File5) – Common image format, supported.

WEBP (File4) – Not universally supported in all Azure AI Content Understanding implementations, so it is excluded.

Why Other Options Are Incorrect

A – Missing JPG and PNG, which are supported.

B – Missing DOCX, which is supported.

C – Missing PNG, which is supported.

E – Includes WEBP, which is typically not supported.

Reference
Microsoft Learn: Azure AI Content Understanding – Supported formats – PDF, JPG, PNG, DOCX, and others; WEBP may not be listed.
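Option D amounts to filtering the file list by extension. The supported set below encodes the assumption stated in the explanation (WEBP excluded); the current Microsoft Learn page remains the authoritative list:

```python
# Filter a file list by the formats the explanation treats as supported.
# The SUPPORTED set is an assumption from the explanation above (WEBP
# deliberately excluded), not an authoritative service contract.

SUPPORTED = {".pdf", ".jpg", ".jpeg", ".docx", ".png"}

def analyzable(files):
    """Return the files whose extension is in the supported set."""
    return [f for f in files
            if "." + f.rsplit(".", 1)[-1].lower() in SUPPORTED]

files = ["File1.pdf", "File2.jpg", "File3.docx", "File4.webp", "File5.png"]
```

Applying the filter to the five files yields exactly the four files in option D.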

You have a product support manual.

You need to build a product support chatbot based on the manual. The solution must minimize development effort and costs.

What should you use?

A. Azure AI Phi-3-medium with fine-tuning

B. Azure AI Language custom question answering

C. Azure OpenAI GPT-4 with grounding data that uses Azure AI Search

D. Azure AI Document Intelligence

B.   Azure AI Language custom question answering

You are building a chatbot by using the Microsoft Bot Framework SDK. The bot will be used to accept food orders from customers and allow the customers to customize each food item. You need to configure the bot to ask the user for additional input based on the type of item ordered. The solution must minimize development effort. Which two types of dialogs should you use? Each correct answer presents part of the solution. NOTE: Each correct selection is worth one point.

A. adaptive

B. action

C. waterfall

D. prompt

E. input

C.   waterfall
D.   prompt

Explanation
The scenario requires gathering multiple inputs (item type, then customizations), but the type of customizations depends on the item ordered. A waterfall dialog lets you execute sequential steps and pass data between them, so after you know the item you can decide which prompt dialogs (e.g., choice prompt, text prompt) to use for the relevant customizations. This keeps development straightforward and reusable.

Correct Options

C. waterfall
A waterfall dialog runs a series of steps one after another. In this bot, you can first ask for the food item, then based on that item, proceed to the next step where you ask for customizations. Waterfall dialogs make it easy to branch or change subsequent prompts dynamically.

D. prompt
Prompt dialogs (e.g., choice prompt, text prompt, confirm prompt) are designed to ask a question, validate the response, and reprompt if needed. After the bot knows the item type, you can call the appropriate prompt(s) to collect each customization (e.g., size, toppings, special instructions). Prompts reduce the amount of validation and re-prompting logic you must write manually.

Incorrect Options

A. adaptive – Adaptive dialogs are more powerful but also more complex. They are overkill for this relatively linear ordering flow and would increase development effort compared to waterfall + prompts.

B. action – “Action” is not a standard dialog type in Bot Framework. Actions are part of adaptive dialogs or other frameworks, not a standalone dialog for simple sequential input.

E. input – “Input” is not a built‑in dialog type. Bot Framework provides prompt dialogs for collecting input, not a generic “input” dialog.

You have an Azure subscription that contains an Azure AI Document Intelligence resource named AIdoc1.

You have an app named App1 that uses AIdoc1. App1 analyzes business cards by calling business card model v2.1.

You need to update App1 to ensure that the app can interpret QR codes. The solution must minimize administrative effort.

What should you do first?

A. Deploy a custom model.

B. Implement the read model.

C. Upgrade the business card model to v3.0

D. Implement the contract model

C.   Upgrade the business card model to v3.0

Explanation:
QR code interpretation is a feature introduced in Document Intelligence's prebuilt-businessCard model version 3.0 and later. By upgrading from v2.1 to v3.0, the business card model gains the ability to read QR codes without deploying a custom model or changing to a different model type. This minimizes administrative effort.

Correct Option:

C. Upgrade the business card model to v3.0
Version 3.0 of the business card model (now often part of prebuilt-layout or prebuilt-document) includes enhanced extraction capabilities, including QR code reading. Upgrading the model version in App1's API call (changing the API version and model ID) is the simplest change to add QR code support.

Incorrect Options:

A. Deploy a custom model. –
Creating a custom model requires labeling training data and training. This is high administrative effort and unnecessary when the pre-built model already supports QR codes in newer versions.

B. Implement the read model. –
The read model extracts text but does not specifically interpret QR codes as structured data. It would treat a QR code as a text string, not as a QR code entity.

D. Implement the contract model. –
The contract model is for analyzing contracts (parties, dates, obligations). It does not interpret QR codes on business cards.

Reference:
Microsoft Learn: "Document Intelligence – Business card model v3.0" – Supports QR code extraction.

You have an Azure OpenAI model.

You have 500 prompt-completion pairs that will be used as training data to fine-tune the model.

You need to prepare the training data.

Which format should you use for the training data file?

A. XML

B. JSONL

C. CSV

D. TSV

B.   JSONL

Explanation:
Azure OpenAI fine-tuning requires training data in JSONL (JSON Lines) format, where each line is a separate JSON object containing prompt and completion fields (or messages for chat models). JSONL is efficient for streaming large datasets and is the standard format for fine-tuning with OpenAI.

Correct Option:

B. JSONL
JSON Lines (.jsonl) is the required format for Azure OpenAI fine-tuning. Each line is a valid JSON object. For completion models, each line contains {"prompt": "...", "completion": "..."}. For chat models, each line contains {"messages": [{"role": "user", "content": "..."}, {"role": "assistant", "content": "..."}]}.
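Both record shapes can be built and round-tripped with the standard json module. The two formats are shown side by side here only for illustration; a real training file uses one format throughout:

```python
import json

# Build a JSONL training file in memory: one JSON object per line.
# Completion-style and chat-style records are shown together only for
# illustration; a real fine-tuning file uses a single format throughout.
records = [
    {"prompt": "What is the capital of France?", "completion": "Paris"},
    {"messages": [
        {"role": "user", "content": "What is the capital of France?"},
        {"role": "assistant", "content": "Paris"},
    ]},
]

jsonl_text = "\n".join(json.dumps(r) for r in records)

# Every line must parse back as a standalone JSON object.
parsed = [json.loads(line) for line in jsonl_text.splitlines()]
```

This line-at-a-time parse is also a quick sanity check to run over a prepared training file before uploading it.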

Incorrect Options:

A. XML –
XML is not supported for Azure OpenAI fine-tuning training data. The API expects JSONL format.

C. CSV –
CSV is not a supported format. While you could convert CSV to JSONL, the training API does not accept CSV directly.

D. TSV –
TSV (Tab-Separated Values) is not supported. The fine-tuning API specifically requires JSONL.

Reference:
Microsoft Learn: "Azure OpenAI – Fine-tuning" – Training data must be in JSONL format.
