Microsoft AI-102 Practice Test Questions
Topic 1: Wide World Importers
Case study
This is a case study. Case studies are not timed separately. You can use as much exam
time as you would like to complete each case. However, there may be additional case
studies and sections on this exam. You must manage your time to ensure that you are able
to complete all questions included on this exam in the time provided.
To answer the questions included in a case study, you will need to reference information
that is provided in the case study. Case studies might contain exhibits and other resources
that provide more information about the scenario that is described in the case study. Each
question is independent of the other questions in this case study.
At the end of this case study, a review screen will appear. This screen allows you to review
your answers and to make changes before you move to the next section of the exam. After
you begin a new section, you cannot return to this section.
To start the case study
To display the first question in this case study, click the Next button. Use the buttons in the
left pane to explore the content of the case study before you answer the questions. Clicking
these buttons displays information such as business requirements, existing environment,
and problem statements. If the case study has an All Information tab, note that the
information displayed is identical to the information displayed on the subsequent tabs.
When you are ready to answer a question, click the Question button to return to the
question.
Overview
Existing Environment
A company named Wide World Importers is developing an e-commerce platform.
You are working with a solutions architect to design and implement the features of the e-commerce platform. The platform will use microservices and a serverless environment built
on Azure.
Wide World Importers has a customer base that includes English, Spanish, and
Portuguese speakers.
Applications
Wide World Importers has an App Service plan that contains the web apps shown in the
following table.

You are planning the product creation project.
You need to build the REST endpoint to create the multilingual product descriptions.
How should you complete the URI? To answer, select the appropriate options in the
answer area.
NOTE: Each correct selection is worth one point.

Summary:
To build a REST endpoint for creating multilingual product descriptions, you are using the Azure Translator service. The base URI for the global translator endpoint is api.cognitive.microsofttranslator.com. The specific REST API path to perform text translation is /translate. Combined with the required api-version parameter, this forms the complete endpoint for the translation operation.
Correct Options:
The complete URI should be constructed as follows:
Base URI: api.cognitive.microsofttranslator.com
Path: /translate
Query String: ?api-version=3.0&to=es&to=pt
Explanation of the Correct Options:
api.cognitive.microsofttranslator.com:
This is the standard global endpoint for the Azure Translator service. It provides the highest availability and performance for translation requests and is the correct base URI to use.
/translate:
This is the specific API path for the Text Translation feature. It is the operation that takes text in one language and returns the translated text in one or more target languages.
?api-version=3.0&to=es&to=pt:
The api-version=3.0 is mandatory. The to parameter specifies the target language(s). The example to=es&to=pt correctly requests translation into both Spanish and Portuguese.
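For reference, here is a minimal sketch of how the assembled URI could be called from code; the subscription key, region, and sample product text are placeholders rather than values from the case study.

```python
# Illustrative sketch: calling the assembled Translator URI with the requests library.
import requests

endpoint = "https://api.cognitive.microsofttranslator.com/translate"
params = {"api-version": "3.0", "to": ["es", "pt"]}  # encoded as to=es&to=pt
headers = {
    "Ocp-Apim-Subscription-Key": "<your-translator-key>",      # placeholder
    "Ocp-Apim-Subscription-Region": "<your-resource-region>",  # needed for regional keys
    "Content-Type": "application/json",
}
body = [{"text": "Hand-crafted ceramic mug, 350 ml"}]  # example product description

response = requests.post(endpoint, params=params, headers=headers, json=body)
response.raise_for_status()
for translation in response.json()[0]["translations"]:
    print(translation["to"], ":", translation["text"])
```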
Incorrect Options:
api-nam.cognitive.microsofttranslator.com:
This is an example of a regional endpoint name (in this case, "North America"). While functional, the global endpoint is preferred unless you have a specific requirement to use a regional one.
westus.tts.speech.microsoft.com:
This is an endpoint for the Text-to-Speech service, not the Translator service. It is used for converting text into spoken audio, not for translating text between languages.
wwics.cognitiveservices.azure.com/translator:
This is a custom subdomain endpoint for a Cognitive Services resource. Custom subdomains are used in specific scenarios (for example, Azure Active Directory authentication), but the standard base URI for the Text Translation REST API is api.cognitive.microsofttranslator.com.
/languages: This API path is used to retrieve a list of supported languages by the Translator service. It is a discovery operation, not one that performs translations.
/text-to-speech: This path is associated with the Speech service for synthesis, not with the Translator service for text translation.
Reference:
Microsoft Official Documentation: Translator v3.0 Reference - Translate - This document details the exact endpoint and parameters: POST https://api.cognitive.microsofttranslator.com/translate?api-version=3.0
You are developing the shopping on-the-go project.
You need to build the Adaptive Card for the chatbot.
How should you complete the code? To answer, select the appropriate options in the
answer area.
NOTE: Each correct selection is worth one point.

Summary:
This Adaptive Card needs to display dynamic, localized content based on a language variable. The correct syntax involves using the language variable as a dynamic key to access the correct localized string from objects like name and image.allText. For conditional visibility, the $when property uses a logical expression to show an element only when the stockLevel is not 'OK'. The color property uses a semantic color name.
Correct Options:
The code should be completed with the following selections:
name[language]
"$when": "$[stockLevel != 'OK']"
image.allText[language]
Explanation of the Correct Options:
name[language]:
This is the correct way to dynamically access a property. The name object is expected to have properties for each language (e.g., name.en, name.es, name.pt). Using bracket notation [language] allows the language variable's value (e.g., 'en') to be used to select the correct localized name, such as name['en'].
"$when": "$[stockLevel != 'OK']":
The $when property controls the visibility of an element. This expression means "show this TextBlock when the stockLevel is not equal to 'OK'". This is used to display a warning message only when the stock level is low or out of stock.
image.allText[language]:
Similar to the product name, this uses the dynamic language variable to access the alt text or description for the image in the correct language from the image.allText object.
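A minimal sketch of the relevant template elements follows, expressed as a Python dictionary so it can be serialized to card JSON. It uses the current ${...} templating syntax (the exam exhibit shows the older $[...] form), and any field names beyond those quoted above are assumptions.

```python
# Sketch of the card body only; the data payload is expected to supply
# `language`, `stockLevel`, `name`, and `image` objects.
import json

card_body = [
    {
        "type": "TextBlock",
        # dynamic key lookup: name['en'] / name['es'] / name['pt'] depending on `language`
        "text": "${name[language]}",
    },
    {
        "type": "TextBlock",
        "text": "Low stock",
        # only rendered when the stock level is not OK
        "$when": "${stockLevel != 'OK'}",
    },
    {
        "type": "Image",
        "url": "${image.url}",          # assumed property name
        # localized alternative text for accessibility
        "altText": "${image.allText[language]}",
    },
]

print(json.dumps(card_body, indent=2))
```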
Incorrect Options:
if(language == 'en', 'en', name) / name / name.en:
These are incorrect for dynamic localization. The if expression does not return a localized string (it yields the literal 'en' or the entire name object), using just name would output the whole object, and name.en would hardcode the language to English, ignoring the user's language setting.
"$when": "$[stockLevel == 'OK']":
This would show the element when the stock level is OK, which is the opposite of the typical use case for a warning message.
"$when": "$[stockLevel.OK]":
This is invalid syntax. The $when property requires a logical expression, not just a property path.
image.allText.en / image.allText.language / image.allText["language"]:
These are all incorrect for dynamic access. image.allText.en hardcodes English. image.allText.language and image.allText["language"] look for a property literally named "language" instead of using the variable's value as the key.
Reference:
Microsoft Official Documentation: Adaptive Cards Templating SDK - Data Binding - The documentation explains how to use the dot and indexer syntax for data binding, which is the basis for the name[language] and image.allText[language] selections.
Microsoft Official Documentation: Adaptive Cards Templating - $when - Details the use of the $when property for conditional visibility.
You need to develop code to upload images for the product creation project. The solution
must meet the accessibility requirements.
How should you complete the code? To answer, select the appropriate options in the
answer area.
NOTE: Each correct selection is worth one point.

Summary:
To meet accessibility requirements, you need to generate descriptive alt text for images. The Azure Computer Vision service's AnalyzeImageAsync method can provide a textual description of an image. You must request the Description visual feature, and then extract the most confident caption from the result to use as the alt text.
Correct Options:
The code should be completed with the following selections:
VisualFeatureTypes.Description
var c = results.Description.Captions[0]
Explanation of the Correct Options:
VisualFeatureTypes.Description:
This is the specific visual feature that instructs the Computer Vision service to generate a human-readable phrase or sentence that describes the image's content. This is the direct source for creating alt text.
var c = results.Description.Captions[0]:
The Description property of the result contains a Captions list. This list is ordered by confidence, with the most confident caption first ([0]). This primary caption is the best candidate for alt text, as it represents the service's most reliable description of the entire image.
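A comparable sketch in Python (the exhibit itself uses the C# SDK) is shown below, assuming the azure-cognitiveservices-vision-computervision package; the endpoint, key, and image URL are placeholders.

```python
# Request the Description feature and take the highest-confidence caption as alt text.
from azure.cognitiveservices.vision.computervision import ComputerVisionClient
from azure.cognitiveservices.vision.computervision.models import VisualFeatureTypes
from msrest.authentication import CognitiveServicesCredentials

client = ComputerVisionClient(
    "https://<your-resource>.cognitiveservices.azure.com/",
    CognitiveServicesCredentials("<your-key>"),
)

analysis = client.analyze_image(
    "https://example.com/product.jpg",                 # placeholder image URL
    visual_features=[VisualFeatureTypes.description],
)

# Captions are ordered by confidence; the first one is the best candidate for alt text.
caption = analysis.description.captions[0]
print(caption.text, caption.confidence)
```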
Incorrect Options:
VisualFeatureTypes.ImageType:
This feature detects whether an image is clip art or a line drawing. It does not generate a descriptive caption for the image's content.
VisualFeatureTypes.Objects:
This feature detects and locates physical objects within the image (e.g., "chair," "person"). While useful, it provides a list of individual objects rather than a coherent, descriptive sentence for the entire scene, making it less suitable for direct use as alt text.
VisualFeatureTypes.Tags:
This feature generates a list of relevant keywords or tags associated with the image (e.g., "grass," "outdoor," "dog"). Like Objects, it provides a collection of terms but not a fluent descriptive phrase.
var c = results.Brands.DetectedBrands[0]:
This is used to detect commercial logos and brand names within the image. It is irrelevant for generating a general description of the image for accessibility.
var c = results.Metadata[0]:
The Metadata property contains technical information about the image, such as its dimensions and format, not a descriptive caption.
var c = results.Objects[0]:
This would access the first detected object in the image (e.g., a "tree"), not a descriptive caption of the entire scene.
Reference:
Microsoft Official Documentation: Describe Image - Computer Vision - This document explains how the service "generates a description of an image in human-readable language with complete sentences." It confirms that the description is accessed via the Description feature and its Captions collection.
You are developing the shopping on-the-go project.
You are configuring access to the QnA Maker resources.
Which role should you assign to AllUsers and LeadershipTeam? To answer, select the
appropriate options in the answer area.
NOTE: Each correct selection is worth one point.

Summary:
Role assignment in QnA Maker depends on the required level of access. For the AllUsers group, which likely includes general end-users of the chatbot, the principle of least privilege dictates granting only the ability to query the knowledge base, which is provided by the QnA Maker Reader role. For the LeadershipTeam, which may need to review, test, and improve the knowledge base, a higher level of access is needed, granted by the QnA Maker Editor role, which allows for querying and updating the knowledge base but not full resource management.
Correct Options:
AllUsers: QnA Maker Read
LeadershipTeam: QnA Maker Editor
Explanation of the Correct Options:
AllUsers: QnA Maker Read:
This role is designed for applications and users that only need to query the knowledge base to get answers. It provides the necessary permissions to call the generateAnswer API but does not allow for any modifications to the knowledge base's content or structure. This is the most secure and appropriate role for general user access.
LeadershipTeam: QnA Maker Editor:
This role is intended for knowledge base authors and managers. It grants permissions to query the knowledge base, create new knowledge bases, and, most importantly for a leadership or content team, update and manage the content of existing knowledge bases (e.g., adding, editing, or deleting QnA pairs). It does not grant permissions to manage the underlying Azure resources, which is a security best practice.
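For context, the query-only access described above corresponds to calling the published knowledge base's generateAnswer endpoint. A minimal sketch follows; the host name, knowledge base ID, endpoint key, and question are placeholders, and the runtime call itself authenticates with the endpoint key rather than an Azure role assignment.

```python
# Query-only access: call generateAnswer against a published knowledge base.
import requests

runtime_host = "https://<your-qnamaker-resource>.azurewebsites.net"  # placeholder
kb_id = "<knowledge-base-id>"                                        # placeholder
endpoint_key = "<endpoint-key>"                                      # placeholder

response = requests.post(
    f"{runtime_host}/qnamaker/knowledgebases/{kb_id}/generateAnswer",
    headers={
        "Authorization": f"EndpointKey {endpoint_key}",
        "Content-Type": "application/json",
    },
    json={"question": "What is your return policy?", "top": 1},
)
response.raise_for_status()
print(response.json()["answers"][0]["answer"])
```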
Incorrect Options:
Cognitive Services User:
This is a general role for Azure Cognitive Services. While it might work for querying, it is not the specific, purpose-built role for QnA Maker and may grant broader access than intended to other cognitive services within the same resource group.
Contributor:
This is a high-privilege Azure Resource Manager role. It allows management of the entire QnA Maker resource itself (e.g., deleting the resource, changing pricing tiers) but does not inherently grant data plane access to the knowledge base content. It is overly permissive for both user groups.
Owner:
This is the highest-privilege Azure Resource Manager role. It includes all permissions of the Contributor role plus the ability to manage role assignments (RBAC). This is far too powerful for either group and violates the principle of least privilege.
Reference:
Microsoft Official Documentation: Access control in QnA Maker - This document explicitly defines the purpose-built QnA Maker roles: "The QnA Maker Reader role is meant for production endpoint users who only need to access the knowledge base... The QnA Maker Editor role is meant for the KB authors to create, edit, and manage the KB."
You are developing the smart e-commerce project.
You need to implement autocompletion as part of the Cognitive Search solution.
Which three actions should you perform? Each correct answer presents part of the
solution. (Choose three.)
NOTE: Each correct selection is worth one point.
A. Make API queries to the autocomplete endpoint and include suggesterName in the body.
B. Add a suggester that has the three product name fields as source fields.
C. Make API queries to the search endpoint and include the product name fields in the searchFields query parameter.
D. Add a suggester for each of the three product name fields.
E. Set the searchAnalyzer property for the three product name variants.
F. Set the analyzer property for the three product name variants.
Answer: A, B, F
Summary:
To implement autocomplete, you must first define a suggester in your index, which acts as the engine for type-ahead queries. This suggester should include all the product name fields you want to use for suggestions. Finally, you call the dedicated autocomplete REST API endpoint, specifying the name of the suggester you created to retrieve the completion suggestions.
Correct Options:
B. Add a suggester that has the three product name fields as source fields.
F. Set the analyzer property for the three product name variants.
A. Make API queries to the autocomplete endpoint and include suggesterName in the body.
Explanation of the Correct Options:
B. Add a suggester that has the three product name fields as source fields.:
This is the foundational step. A suggester is a search construct that defines which fields in an index support type-ahead completion. You create a single suggester and assign the relevant product name fields to it as source fields.
F. Set the analyzer property for the three product name variants.:
The analyzer property determines how text is tokenized and processed during indexing. For autocomplete to work effectively, the fields used in the suggester must be analyzed consistently. Setting a standard analyzer (like standard.lucene or en.microsoft) ensures the terms are broken down in a way that the suggester can match partial queries. This is a prerequisite for the suggester to function correctly.
A. Make API queries to the autocomplete endpoint and include suggesterName in the body.:
Once the index with the suggester is built, you use the specific /docs/autocomplete POST API endpoint. The request body must include the suggesterName parameter to tell the service which suggester to use for generating the completion list, along with the partial search text.
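A minimal sketch of these pieces follows; the index name, field names, suggester name, and API version are assumptions used only for illustration.

```python
# One suggester covering the three product name fields, plus an autocomplete query
# that references it by name.
import requests

search_service = "https://<your-search-service>.search.windows.net"  # placeholder
index_name = "products"                                              # assumed index name
headers = {"api-key": "<query-key>", "Content-Type": "application/json"}

# Fragment of the index definition: a single suggester with three source fields.
suggester_fragment = {
    "suggesters": [
        {
            "name": "sg",
            "searchMode": "analyzingInfixMatching",
            "sourceFields": ["productNameEn", "productNameEs", "productNamePt"],
        }
    ]
}

# Type-ahead query against the dedicated autocomplete endpoint.
autocomplete_body = {"search": "cer", "suggesterName": "sg", "autocompleteMode": "oneTerm"}
response = requests.post(
    f"{search_service}/indexes/{index_name}/docs/autocomplete?api-version=2020-06-30",
    headers=headers,
    json=autocomplete_body,
)
response.raise_for_status()
print([item["text"] for item in response.json()["value"]])
```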
Incorrect Options:
C. Make API queries to the search endpoint and include the product name fields in the searchFields query parameter.:
The /search endpoint is used for full search queries that return documents. Suggestions and autocompletion are served by their own endpoints (/docs/suggest and /docs/autocomplete); because the question specifically asks for autocompletion, the dedicated /docs/autocomplete endpoint is the correct, lightweight choice for a type-ahead experience.
D. Add a suggester for each of the three product name fields.:
This is inefficient and incorrect. You create a single suggester and assign multiple source fields to it. Creating a separate suggester for each field is unnecessary and complicates the API calls, as you can only specify one suggesterName per autocomplete request.
E. Set the searchAnalyzer property for the three product name variants.:
The searchAnalyzer is used at query time. For suggester-based features like autocomplete and suggestions, the indexing analyzer (set by the analyzer property) is the critical one because the suggester's dictionary is built during the indexing process. The searchAnalyzer is less relevant for this specific feature.
Reference:
Microsoft Official Documentation: Add suggesters to build autocomplete and suggested results in a query
Microsoft Official Documentation: Create suggester using the REST API - See the "Suggesters" section in the request body.
Microsoft Official Documentation: Autocomplete API
You are developing the smart e-commerce project.
You need to design the skillset to include the contents of PDFs in searches.
How should you complete the skillset design diagram? To answer, drag the appropriate
services to the correct stages. Each service may be used once, more than once, or not at
all. You may need to drag the split bar between panes or scroll to view content.
NOTE: Each correct selection is worth one point.

Summary:
To include PDF contents in searches, you use Azure Cognitive Search's AI enrichment pipeline. The Source is where the original PDF files are stored, which is Azure Blob Storage. The Cracking stage is the process of extracting text and structure from the PDFs, performed by the indexer's built-in document cracking capability. The Preparation stage uses cognitive skills, such as those surfaced through the Language Understanding API, to enrich the raw text with deeper semantic understanding (for example, detecting entities and key phrases). The enriched data is then written to the Destination; among the listed options, Azure Cosmos DB is the service that fits this stage.
(Note: The diagram's "Destination" is ambiguous. The primary destination for a search skillset is the Search Index. Azure Cosmos DB is a less common destination but can be used. Given the available options, it is the most logical fit.)
Correct Options:
Source: Azure Blob storage
Cracking: Azure Blob storage
Preparation: Language Understanding API
Destination: Azure Cosmos DB
Explanation of the Correct Options:
Source: Azure Blob storage:
This is the origin of the data. PDF files are typically stored in an Azure Blob Storage container, which the search indexer connects to.
Cracking: Azure Blob storage:
This stage refers to document cracking, which is the process where the Azure Cognitive Search indexer extracts text and metadata from the PDF files. While the service performing the action is the indexer, the data source being cracked is the content from Azure Blob Storage. In the context of this diagram, the service associated with this stage is the source of the documents being cracked.
Preparation: Language Understanding API:
After the raw text is extracted from the PDFs (cracking), it is sent through a skillset for enrichment. The Language service (which encompasses the Language Understanding API) provides skills like entity recognition, key phrase extraction, and sentiment analysis, which add valuable, searchable structure to the unstructured PDF text.
Destination: Azure Cosmos DB:
The final enriched and structured data needs to be stored. The most common destination is the Azure Cognitive Search index itself, but in this scenario the enriched output is persisted to Azure Cosmos DB, which is the only storage option in the list that fits the Destination stage.
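A simplified sketch of how the stages map to Cognitive Search REST payloads is shown below; every resource name, connection string, and skill choice is hypothetical, and document cracking itself needs no explicit configuration beyond the indexer.

```python
# Simplified payload fragments for the pipeline stages (all names are hypothetical).
data_source = {  # Source: PDFs live in an Azure Blob Storage container
    "name": "wwi-pdf-datasource",
    "type": "azureblob",
    "credentials": {"connectionString": "<storage-connection-string>"},
    "container": {"name": "product-pdfs"},
}

skillset = {  # Preparation: cognitive skills enrich the cracked text
    "name": "wwi-pdf-skillset",
    "skills": [
        {
            "@odata.type": "#Microsoft.Skills.Text.KeyPhraseExtractionSkill",
            "inputs": [{"name": "text", "source": "/document/content"}],
            "outputs": [{"name": "keyPhrases", "targetName": "keyPhrases"}],
        }
    ],
}

indexer = {  # Cracking + Destination: the indexer cracks the PDFs and writes the output
    "name": "wwi-pdf-indexer",
    "dataSourceName": "wwi-pdf-datasource",
    "skillsetName": "wwi-pdf-skillset",
    "targetIndexName": "wwi-products-index",
    "parameters": {
        "configuration": {"parsingMode": "default", "dataToExtract": "contentAndMetadata"}
    },
}
```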
Incorrect Options:
Custom Vision API:
This service is for image classification and object detection. It is not used for processing the textual content of PDFs.
Azure Files:
While a storage service, Azure Files (SMB shares) is not a standard data source for an Azure Cognitive Search indexer. Azure Blob Storage is the preferred and most common source for documents like PDFs.
Translator API:
This service translates text from one language to another. It could be used in a skillset for translation, but it is not a core service for the fundamental task of extracting and enriching text from PDFs for search.
Computer Vision API:
This service analyzes visual content in images. If a PDF contains images, this could be used with an OCR skill to extract text from those images, but for a standard PDF with primarily digital text, the built-in document cracking and text-based cognitive skills are more direct and efficient.
Reference:
Microsoft Official Documentation: AI enrichment in Azure Cognitive Search - This outlines the overall pipeline of source, cracking, enrichment (preparation), and destination.
Microsoft Official Documentation: Built-in document cracking in Azure Cognitive Search - Explains how the indexer extracts content from source documents like PDFs.
Microsoft Official Documentation: Built-in cognitive skills for text and image processing - Lists skills from the Language service and Computer Vision service that are used in the enrichment (preparation) stage.
You are planning the product creation project.
You need to recommend a process for analyzing videos.
Which four actions should you perform in sequence? To answer, move the appropriate
actions from the list of actions to the answer area and arrange them in the correct order.
(Choose four.)

1. Upload the video to blob storage:
This is the foundational step. The video file must be placed in a cloud storage service that the Video Indexer can access. Azure Blob Storage is the standard and recommended service for storing such unstructured data.
2. Analyze the video by using the Video Indexer API:
Once the video is in storage, you initiate the core processing. The Video Indexer API is a purpose-built service that uses multiple Azure Cognitive Services to analyze audio and video tracks, extracting insights like faces, keywords, sentiments, and crucially, a text transcript.
3. Extract the transcript from the Video Indexer API:
After the analysis is complete, you retrieve the specific output you need. In this case, you call the Video Indexer API to get the textual transcript that was generated from the video's audio.
4. Send the transcript to the Language Understanding API as an utterance:
With the raw transcript text available, you can now perform deeper language analysis. Sending this text to the Language Understanding (LUIS) or Conversational Language Understanding (CLU) API allows it to be interpreted as a user utterance, enabling you to identify intents and entities for further application logic.
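A minimal sketch of steps 2 and 3 against the Video Indexer REST API follows; the location, account ID, access token, and video URL are placeholders, and in practice you would poll the index until processing completes before reading the transcript.

```python
# Steps 2-3: analyze a video already in Blob Storage, then pull the transcript.
import requests

location = "<location>"          # e.g. "trial" or an Azure region
account_id = "<account-id>"
access_token = "<access-token>"  # obtained from the Video Indexer auth API
base = f"https://api.videoindexer.ai/{location}/Accounts/{account_id}"

# Step 2: start analysis of a video referenced by a Blob Storage SAS URL.
upload = requests.post(
    f"{base}/Videos",
    params={"accessToken": access_token, "name": "product-demo", "videoUrl": "<blob-sas-url>"},
)
upload.raise_for_status()
video_id = upload.json()["id"]

# Step 3: once indexing has finished, fetch the index and extract the transcript lines.
index = requests.get(f"{base}/Videos/{video_id}/Index", params={"accessToken": access_token})
index.raise_for_status()
transcript = [item["text"] for item in index.json()["videos"][0]["insights"]["transcript"]]
print(" ".join(transcript))  # this text is then sent to the language model as an utterance
```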
Incorrect Options:
Index the video by using the Video Indexer API:
This is redundant with "Analyze the video." The analysis process performed by Video Indexer inherently creates an index of the video's content.
Analyze the video by using the Computer Vision API:
The Computer Vision API is designed for analyzing still images, not video files. Video Indexer is the correct service for video content.
Extract the transcript from Microsoft Stream:
Microsoft Stream is a video service, but the process should use the centralized Azure services. Video Indexer is the tool that performs the transcript extraction.
Translate the transcript by using the Translator API:
While a possible step, it is not part of the core sequence for analyzing the video's content for understanding. It is an optional step that would come after extraction if translation were needed.
Upload the video to file storage:
Azure Blob Storage is the standard, scalable object storage for such scenarios. Azure Files is typically used for file shares and is not the conventional choice for feeding content into cognitive services like Video Indexer.
Reference:
Microsoft Official Documentation: What is Azure Video Indexer? - Explains that Video Indexer analyzes videos to extract insights, including transcripts.
Microsoft Official Documentation: Upload and index your videos - Details the process starting from uploading a video to indexing it.
You need to develop an extract solution for the receipt images. The solution must meet the document processing requirements and the technical requirements.
You upload the receipt images to the Form Recognizer API for analysis, and the API returns the following JSON.

Which expression should you use to trigger a manual review of the extracted information by
a member of the Consultant-Bookkeeper group?
A. documentResults.docType == "prebuilt:receipt"
B. documentResults.fields.*.confidence < 0.7
C. documentResults.fields.ReceiptType.confidence > 0.7
D. documentResults.fields.MerchantName.confidence < 0.7
Answer: B
Summary:
The technical requirement is to trigger a manual review when confidence in extracted data is low. The JSON response shows individual confidence scores for each extracted field. To flag a receipt for review, you need an expression that checks if the confidence score for any critical field falls below a specific threshold. The MerchantName field has a high confidence (0.913), but the ReceiptType field has a low confidence (0.672), making it a candidate for review.
Correct Option:
B. documentResults.fields.*.confidence < 0.7
This expression is the most robust and correct choice. The asterisk (*) acts as a wildcard, checking the confidence score for every field within the fields collection. If any field (e.g., ReceiptType, MerchantName, TransactionDate, etc.) has a confidence score below 0.7, the condition will be true, triggering a manual review. This ensures that any low-confidence extraction, not just from a single predefined field, is flagged.
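A short sketch of how this wildcard check could be applied in application code follows; the merchant value is hypothetical, while the confidence figures mirror those quoted above.

```python
# Wildcard check: flag the receipt if any field's confidence is below the threshold.
def needs_manual_review(analyze_result: dict, threshold: float = 0.7) -> bool:
    """Return True when any extracted field's confidence falls below the threshold."""
    for document in analyze_result.get("documentResults", []):
        for field in document.get("fields", {}).values():
            if field.get("confidence", 0.0) < threshold:
                return True
    return False


# Example fragment mirroring the values quoted above: MerchantName is confident,
# ReceiptType is not, so the receipt is routed to the Consultant-Bookkeeper group.
sample = {
    "documentResults": [
        {
            "docType": "prebuilt:receipt",
            "fields": {
                "MerchantName": {"text": "Contoso", "confidence": 0.913},  # hypothetical text
                "ReceiptType": {"valueString": "Itemized", "confidence": 0.672},
            },
        }
    ]
}
print(needs_manual_review(sample))  # True
```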
Incorrect Options:
A. documentResults.docType == "prebuilt:receipt":
This expression only checks if the analyzed document was identified as a receipt. It does not evaluate the confidence of the extracted data and would trigger for every single receipt, regardless of extraction quality.
C. documentResults.fields.ReceiptType.confidence > 0.7:
This expression checks for high confidence in the ReceiptType field. It would trigger an action when confidence is high, which is the opposite of what is needed for a manual review. Furthermore, it ignores other important fields like MerchantName or Total.
D. documentResults.fields.MerchantName.confidence < 0.7:
While this logic is correct (checking for low confidence), it is too narrow. It only checks the MerchantName field. In the provided JSON, the MerchantName confidence is 0.913 (high), so this receipt would not be flagged for review, even though the ReceiptType confidence is low (0.672). A comprehensive solution should check all fields.
Reference:
Microsoft Official Documentation: Form Recognizer Confidence Scores - The documentation explains that a confidence score is provided for each extracted value and that these scores can be used to review inputs that have low confidence.
You are developing the document processing workflow.
You need to identify which API endpoints to use to extract text from the financial
documents. The solution must meet the document processing requirements.
Which two API endpoints should you identify? Each correct answer presents part of the
solution.
NOTE: Each correct selection is worth one point.
A. /vision/v3.2/read/analyzeResults
B. /formrecognizer/v2.0/prebuilt/receipt/analyze
C. /vision/v3.2/read/analyze
D. /vision/v3.2/describe
E. /formrecognizer/v2.0/custom/models/{modelId}/analyze
Answer: A, B
Summary:
To extract text from financial documents, you need endpoints for both unstructured text (like letters) and structured documents (like receipts). The Computer Vision Read API is optimized for reading unstructured text in images and documents. The Form Recognizer Prebuilt Receipt API is specifically designed to extract structured data (like merchant name, date, total) from receipts, which is a key financial document type.
Correct Options:
A. /vision/v3.2/read/analyzeResults
B. /formrecognizer/v2.0/prebuilt/receipt/analyze
Explanation of the Correct Options:
A. /vision/v3.2/read/analyzeResults:
This endpoint is part of the Computer Vision Read API. You first submit a document to the /vision/v3.2/read/analyze endpoint to start the asynchronous analysis. You then use this analyzeResults endpoint to retrieve the results, which include all the text lines and words extracted from the document, along with their location information. This is ideal for unstructured financial documents like letters or invoices where you need raw text.
B. /formrecognizer/v2.0/prebuilt/receipt/analyze:
This is the endpoint for the Form Recognizer service's prebuilt receipt model. It is specifically trained to understand the layout of receipts. It not only extracts raw text but also the structured fields (e.g., MerchantName, TransactionDate, Total) with semantic understanding, which is crucial for processing financial documents like receipts.
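A condensed sketch of both calls is shown below; the endpoint, key, and document URLs are placeholders, and error handling is kept minimal.

```python
# Two-step Read API flow for unstructured documents, plus the prebuilt receipt call.
import time
import requests

endpoint = "https://<your-resource>.cognitiveservices.azure.com"  # placeholder
headers = {"Ocp-Apim-Subscription-Key": "<your-key>", "Content-Type": "application/json"}

# Step 1: submit the document; the response carries an Operation-Location header.
submit = requests.post(
    f"{endpoint}/vision/v3.2/read/analyze",
    headers=headers,
    json={"url": "https://example.com/letter.pdf"},
)
submit.raise_for_status()
operation_url = submit.headers["Operation-Location"]

# Step 2: poll the analyzeResults URL until the text extraction finishes.
while True:
    result = requests.get(operation_url, headers=headers).json()
    if result["status"] in ("succeeded", "failed"):
        break
    time.sleep(1)
lines = [line["text"]
         for page in result["analyzeResult"]["readResults"]
         for line in page["lines"]]

# Receipts go through the Form Recognizer prebuilt receipt model instead.
receipt = requests.post(
    f"{endpoint}/formrecognizer/v2.0/prebuilt/receipt/analyze",
    headers=headers,
    json={"source": "https://example.com/receipt.jpg"},
)
receipt.raise_for_status()
receipt_operation = receipt.headers["Operation-Location"]  # poll this URL for the fields
```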
Incorrect Options:
C. /vision/v3.2/read/analyze:
This is the initial endpoint to submit a document for analysis to the Computer Vision Read API. However, it only returns an Operation-Location header. It does not return the extracted text itself. The actual results containing the text are retrieved from the analyzeResults endpoint (option A).
D. /vision/v3.2/describe:
This endpoint is for the Computer Vision "Describe Image" feature, which generates a human-readable caption of what is seen in an image (e.g., "a person riding a bike"). It is not designed for optical character recognition (OCR) and text extraction from documents.
E. /formrecognizer/v2.0/custom/models/{modelId}/analyze:
This endpoint is for analyzing documents with a custom Form Recognizer model that you have trained yourself on your own specific document layout. While powerful, it is not necessary for common financial documents like receipts, for which a prebuilt model already exists. Using the prebuilt receipt model (option B) minimizes development effort.
Reference:
Microsoft Official Documentation: Computer Vision Read API v3.2 - Details the two-step process using the analyze and analyzeResults endpoints.
Microsoft Official Documentation: Form Recognizer Prebuilt Receipt model - Explains the use of the prebuilt receipt model and its endpoint.
You are developing the knowledgebase by using Azure Cognitive Search.
You need to build a skill that will be used by indexers.
How should you complete the code? To answer, select the appropriate options in the
answer area.
NOTE: Each correct selection is worth one point.

Summary:
The Entity Recognition skill extracts predefined categories of entities from text. To configure it, you must specify which entity categories you want to extract. The skill then returns the identified entities in its outputs. The standard outputs for this skill are persons, organizations, and locations when the corresponding categories are selected.
Correct Options:
The code should be completed with the following selections:
"categories": [ "Locations", "Persons", "Organizations" ],
{ "name": "persons", "targetName": "people" },
{ "name": "organizations", "targetName": "organizations" }
Explanation of the Correct Options:
"categories": [ "Locations", "Persons", "Organizations" ],:
This defines the types of entities the skill should identify. The EntityRecognitionSkill v1 supports these three specific categories. "Email" is not a valid category for this version of the skill (it is available in the newer PiiDetectionSkill).
{ "name": "persons", "targetName": "people" },:
This output mapping takes the entities identified in the "Persons" category from the skill's internal output (which is always called persons) and projects them into the enrichment tree under the name people. The targetName is an optional alias.
{ "name": "organizations", "targetName": "organizations" }:
Similarly, this maps the "Organizations" category from the skill's internal output (organizations) to the enrichment tree. Here, the targetName is the same as the default output name.
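Put together, the completed skill definition would look roughly like the sketch below; the context, language code, and input source are assumptions, and a locations output could be mapped in the same way if the skillset requires it.

```python
# Sketch of the completed EntityRecognitionSkill definition as skillset JSON,
# expressed as a Python dict (context, language, and input source are assumed).
entity_recognition_skill = {
    "@odata.type": "#Microsoft.Skills.Text.EntityRecognitionSkill",
    "context": "/document",
    "categories": ["Locations", "Persons", "Organizations"],
    "defaultLanguageCode": "en",
    "inputs": [
        {"name": "text", "source": "/document/content"}
    ],
    "outputs": [
        # category-specific outputs projected into the enrichment tree
        {"name": "persons", "targetName": "people"},
        {"name": "organizations", "targetName": "organizations"},
    ],
}
```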
Incorrect Options:
"categories": [ ],:
An empty categories array is invalid. The skill must be told which types of entities to look for.
"categories": ["Email", "Persons", "Organizations"],:
"Email" is not a supported category for the EntityRecognitionSkill. It is a supported category for the PiiDetectionSkill, which is a different skill used for detecting personally identifiable information.
{ "name": "entities" } / { "name": "categories" } / { "name": "namedEntities" }:
These are not valid output names for the EntityRecognitionSkill. The skill has specific, predefined output names corresponding to the selected categories: persons, locations, organizations, and entities (a combined collection).
Reference:
Microsoft Official Documentation: Entity Recognition Cognitive Skill - The document details the skill's categories parameter and its available outputs, including "persons", "locations", "organizations", and "entities".
Microsoft AI-102 Exam Details
Exam Code: AI-102
Exam Name: Designing and Implementing a Microsoft Azure AI Solution
Certification Name: Microsoft Certified: Azure AI Engineer Associate
Certification Provider: Microsoft