Topic 5: Describe features of conversational AI workloads on Azure
You are building a chatbot that will use natural language processing (NLP) to perform the
following actions based on the text input of a user:
• Accept customer orders.
• Retrieve support documents.
• Retrieve order status updates.
Which type of NLP should you use?
A. sentiment analysis
B. translation
C. language modeling
D. named entity recognition
Summary:
This question focuses on identifying the correct Natural Language Processing (NLP) technique for extracting specific, key pieces of information from user text. The chatbot needs to identify and pull out concrete data points—like product names (for orders), document titles (for support), and order numbers (for status)—from a user's sentence to trigger the correct action.
Correct Option:
D. Named Entity Recognition (NER)
Named Entity Recognition (NER) is the correct choice. It is an NLP technique designed specifically to identify and categorize key information (entities) in text into predefined categories such as names, organizations, locations, dates, quantities, and more. In this scenario, NER would be used to extract:
• Product names and quantities from a customer order.
• Document titles or keywords to retrieve the correct support file.
• Order numbers or customer IDs to look up status updates.
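The extraction step above can be illustrated with a deliberately simplified sketch. Real NER (for example, the entity recognition feature of Azure AI Language) uses trained models rather than hand-written patterns; the regexes, entity categories, and sample text below are hypothetical, chosen only to show the kind of typed output NER produces.

```python
import re

# Hypothetical patterns standing in for a trained NER model.
PATTERNS = {
    "OrderNumber": re.compile(r"\border\s+#?(\d{4,})\b", re.IGNORECASE),
    "Quantity":    re.compile(r"\b(\d+)\s+(?:units?|boxes|items?)\b", re.IGNORECASE),
}

def extract_entities(text: str) -> list[tuple[str, str]]:
    """Return (category, value) pairs found in the text."""
    entities = []
    for category, pattern in PATTERNS.items():
        for match in pattern.finditer(text):
            entities.append((category, match.group(1)))
    return entities

print(extract_entities("What is the status of order #10045? I ordered 3 units."))
# [('OrderNumber', '10045'), ('Quantity', '3')]
```

The chatbot would then route on the extracted categories (an OrderNumber triggers a status lookup, a Quantity plus product name triggers an order), which is exactly the role NER plays in the scenario.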
Incorrect Option:
A. Sentiment Analysis
Sentiment analysis is used to determine the emotional tone or opinion in text (e.g., positive, negative, neutral). It is excellent for gauging customer satisfaction but cannot extract specific data points like order numbers or product names from a user's request.
B. Translation
Translation converts text from one language to another. While useful for multilingual chatbots, it does not perform the task of identifying and extracting specific entities from the user's input to perform actions.
C. Language Modeling
Language modeling is a broad concept where a model learns the probability of word sequences to generate coherent text. While it is the foundation for many NLP tasks (including modern chatbots), the specific technique required here to pinpoint and classify key data is Named Entity Recognition (NER). NER is a concrete application built upon language models.
Reference:
What is Named Entity Recognition (NER)? - Azure AI Language
A. Azure AI Language Service
B. Face
C. Azure AI Translator
D. Azure AI Custom Vision
Explanation:
QnA Maker is a cloud-based API service that lets you create a conversational question-and-answer layer over your existing data. Use it to build a knowledge base by extracting questions and answers from your semi-structured content, including FAQs, manuals, and documents. It automatically answers users' questions with the best answers from the QnAs in your knowledge base, and the knowledge base gets smarter over time as it continually learns from user behavior.
Select the answer that correctly completes the sentence.
Summary:
This question tests your understanding of Microsoft's Responsible AI principles. The scenario describes a safeguard to prevent the AI system from making a prediction when its input data is flawed (unusual or missing). This is a measure to ensure the system operates correctly and safely, preventing erroneous or potentially harmful outputs that could result from poor-quality data.
Correct Option:
a reliability and safety
This is the correct principle. Reliability and safety require that AI systems perform consistently and correctly, even in the face of errors or unexpected inputs. By refusing to provide a prediction when critical data is missing or anomalous, the system is demonstrating operational safety. It prevents making a potentially unreliable and unsafe decision that could have negative consequences, thereby upholding this principle.
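The safeguard described above can be sketched in a few lines: validate the input before predicting, and withhold the prediction when a required value is missing or anomalous. The field names, ranges, and stand-in model below are made up for illustration.

```python
# Hypothetical required fields and plausible value ranges.
REQUIRED = {"age": (0, 120), "income": (0, 10_000_000)}

def safe_predict(record: dict, model):
    """Return a prediction, or None when the input cannot be trusted."""
    for field, (low, high) in REQUIRED.items():
        value = record.get(field)
        if value is None or not (low <= value <= high):
            # Withhold the prediction rather than risk an unreliable one.
            return None
    return model(record)

toy_model = lambda record: 0.8  # stand-in for a trained model

print(safe_predict({"age": 35, "income": 50_000}, toy_model))   # 0.8
print(safe_predict({"age": 200, "income": 50_000}, toy_model))  # None (anomalous age)
```

Returning `None` instead of a guess is the operational expression of the reliability and safety principle: the system degrades explicitly rather than silently producing an output it cannot support.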
Incorrect Option:
an inclusiveness
Inclusiveness focuses on ensuring AI systems are fair and work well for all people, regardless of ability, gender, sexuality, ethnicity, or other characteristics. It addresses bias and fairness, not data quality or operational safeguards for missing data.
a privacy and security
This principle concerns the protection of personal data from unauthorized access and ensuring data is used in accordance with privacy laws. While important, it does not directly address the system's behavior when faced with unusual or missing input values.
a transparency
Transparency involves understanding how an AI model makes decisions (interpretability) and being open about its capabilities and limitations. It is about explaining why a prediction was made, not about implementing a safety check to withhold a prediction due to bad input data.
Reference:
Microsoft Responsible AI Principles - Reliability & Safety
For each of the following statements, select Yes if the statement is true. Otherwise, select
No.
NOTE: Each correct selection is worth one point.
Summary:
This question assesses your understanding of the core capabilities and integration features of the Azure Bot Service. It's important to know that this service is designed specifically for creating conversational AI agents, that it can be enhanced with other AI services, and that it has built-in functionality for handling common support queries via FAQs.
Correct Option:
Azure Bot Service and Azure Cognitive Services can be integrated.
Answer: Yes
Explanation:
This is true and a key strength of the Azure AI ecosystem. The Azure Bot Service is designed to be extended by integrating with various Azure Cognitive Services. For example, you can integrate with the Language service for question answering, use Speech service for voice interactions, or employ Translator to make the bot multilingual.
Azure Bot Service engages with customers in a conversational manner.
Answer: Yes
Explanation:
This is the primary purpose of the Azure Bot Service. It provides a framework for building, testing, and deploying conversational AI agents (bots) that can interact with users through natural language on platforms like websites, Microsoft Teams, and Telegram.
Azure Bot Service can import frequently asked questions (FAQ) to question and answer sets.
Answer: Yes
Explanation:
This is a true and supported feature. The Azure Bot Service can be integrated with the Question Answering feature in the Azure AI Language service. This feature allows you to easily create a knowledge base by importing existing FAQ documents from URLs, files, or manually edited content, which the bot can then use to answer user queries.
Incorrect Option:
Azure Bot Service and Azure Cognitive Services can be integrated.
Selecting "No" would be incorrect, as integration with Cognitive Services is a fundamental and documented capability for enhancing a bot's intelligence.
Azure Bot Service engages with customers in a conversational manner.
Selecting "No" would be incorrect, as this is the core definition and function of the service.
Azure Bot Service can import frequently asked questions (FAQ) to question and answer sets.
Selecting "No" would be incorrect, as importing FAQs to build a QnA knowledge base is a standard and well-documented process for creating a support bot.
Reference:
What is the Azure Bot Service?
Create a question answering bot with Azure Bot Service
Match the Azure AI service to the appropriate actions.
To answer, drag the appropriate service from the column on the left to its action on the right.
Each service may be used once, more than once, or not at all.
NOTE: Each correct match is worth one point.
Summary:
This question tests your ability to match core Azure AI services to their primary functions. The key is to distinguish between services that process and understand spoken language, written language, and the specific task of intent recognition within a conversational context.
Correct Matches:
Convert spoken requests into text.
Service: ii. Azure AI Speech
Explanation: This is the core function of the Speech-to-Text capability within Azure AI Speech. It is specifically designed to accurately transcribe spoken audio (a "spoken request") into written text.
Identify the intent of a user’s requests.
Service: i. Azure AI Language
Explanation: This is the primary function of the Conversational Language Understanding (CLU) feature within Azure AI Language. CLU is designed to take a user's input (from text or transcribed speech) and determine their goal or intent, such as "BookFlight" or "CheckBalance."
Apply intent to entities and utterances.
Service: i. Azure AI Language
Explanation: This describes the process of building and training a CLU model. You define the intents (what the user wants to do), entities (key data points like dates or locations), and provide example utterances (phrases a user might say). The Azure AI Language service then learns to apply the correct intent and extract the relevant entities from new user utterances.
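The intent/entity/utterance structure described above can be mimicked with a toy keyword matcher. A real CLU project is trained in Azure AI Language, not matched by keywords; the intent names, keyword sets, and utterances here are hypothetical, but the shape of the result (an intent label for an utterance) is the same.

```python
# Hypothetical intents and the keywords that hint at them.
INTENTS = {
    "BookFlight":   {"book", "flight", "fly"},
    "CheckBalance": {"balance", "account"},
}

def classify_intent(utterance: str) -> str:
    """Pick the intent whose keywords overlap the utterance the most."""
    words = set(utterance.lower().replace("?", "").split())
    best = max(INTENTS, key=lambda intent: len(INTENTS[intent] & words))
    return best if INTENTS[best] & words else "None"

print(classify_intent("I want to book a flight to Paris"))  # BookFlight
print(classify_intent("What is my account balance?"))       # CheckBalance
```

The "None" fallback mirrors the built-in None intent a CLU model returns when no trained intent fits the utterance.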
Incorrect Service:
iii. Azure AI Translator
This service was not used. Azure AI Translator is used for converting text from one language to another. It does not handle speech-to-text conversion or intent/entity recognition.
Reference:
What is Azure AI Speech? - Speech-to-text
What is Conversational Language Understanding?
Which two scenarios are examples of a conversational AI workload? Each correct answer
presents a complete solution.
NOTE: Each correct selection is worth one point.
A. a smart device in the home that responds to questions such as “What will the weather be like today?”
B. a website that uses a knowledge base to interactively respond to users’ questions
C. assembly line machinery that autonomously inserts headlamps into cars
D. monitoring the temperature of machinery to turn on a fan when the temperature reaches a specific threshold
Summary:
A conversational AI workload involves a system that uses natural language processing (NLP) to interact with humans through dialogue. The core function is to understand spoken or written language, process the intent, and provide a relevant, conversational response. This distinguishes it from robotic automation or simple sensor-based triggers.
Correct Option:
A. a smart device in the home that responds to questions such as “What will the weather be like today?”
This is a classic example of conversational AI. The device uses speech recognition to convert the spoken question into text, natural language understanding to determine the user's intent (get a weather forecast), and speech synthesis to speak the answer back to the user in a conversational manner.
B. a website that uses a knowledge base to interactively respond to users’ questions
This is another key example, often implemented as a chatbot. The system uses NLP to interpret the user's text-based questions, queries a knowledge base to find the most relevant information, and then responds in a conversational, interactive style, mimicking a human support agent.
Incorrect Option:
C. assembly line machinery that autonomously inserts headlamps into cars
This is an example of robotic process automation (RPA) or physical robotics. While it is an intelligent automation workload, it does not involve any form of natural language conversation or interaction with a human. The machinery is performing a pre-programmed physical task.
D. monitoring the temperature of machinery to turn on a fan when the temperature reaches a specific threshold
This is an example of sensor-based automation or a simple control system. It involves a straightforward "if-then" rule with no element of language understanding, dialogue, or conversational interaction, which are the hallmarks of a conversational AI workload.
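The fan scenario reduces to a single fixed rule, which is exactly why it is not conversational AI: there is no language, intent, or dialogue involved. A minimal sketch (threshold value is illustrative):

```python
# Simple sensor-based control: a fixed "if-then" rule, no NLP involved.
FAN_ON_THRESHOLD_C = 60.0

def fan_should_run(temperature_c: float) -> bool:
    return temperature_c >= FAN_ON_THRESHOLD_C

print(fan_should_run(72.5))  # True: over threshold, turn the fan on
print(fan_should_run(41.0))  # False: below threshold
```

Contrast this with options A and B, where the system must first interpret free-form human language before it can act.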
Reference:
What is a bot? - Azure Bot Service
What is Conversational Language Understanding? - Azure AI Language
What is an advantage of using a custom model in Form Recognizer?
A. Only a custom model can be deployed on-premises.
B. A custom model can be trained to recognize a variety of form types.
C. A custom model is less expensive than a prebuilt model.
D. A custom model always provides higher accuracy.
Summary:
Azure AI Document Intelligence (formerly Form Recognizer) offers both prebuilt models for common documents and custom models for unique scenarios. The primary advantage of a custom model is its ability to be tailored to recognize and extract specific data from form layouts and document types that the prebuilt models are not designed to handle, such as company-specific invoices, proprietary contracts, or custom application forms.
Correct Option:
B. A custom model can be trained to recognize a variety of form types.
This is the core advantage of a custom model. While prebuilt models are fixed for specific document types (like receipts, invoices, IDs), a custom model can be trained on your own labeled datasets to understand the unique structure, fields, and tables in virtually any document format your organization uses. This flexibility is essential for processing proprietary or non-standard forms.
Incorrect Option:
A. Only a custom model can be deployed on-premises.
This is incorrect. Both prebuilt and custom models are primarily cloud-based services. While Azure AI services offer some on-premises containers for certain scenarios, this capability is not exclusive to custom models and depends on the specific service offering, not the model type.
C. A custom model is less expensive than a prebuilt model.
This is generally false. Developing a custom model incurs costs for training and requires a significant investment of time and resources to prepare a large, accurately labeled dataset. Using a prebuilt model for a common task like processing a standard invoice is typically more cost-effective as there are no training costs.
D. A custom model always provides higher accuracy.
This is incorrect. A custom model's accuracy is highly dependent on the quality, quantity, and representativeness of the training data provided. For common document types like receipts, a prebuilt model, which is trained on a vast and diverse dataset, will likely achieve higher accuracy with zero effort. A custom model provides higher accuracy only for the specific, unique forms it was trained on.
Reference:
What is Azure AI Document Intelligence? - Custom models
To complete the sentence, select the appropriate option in the answer area.
Summary:
This question describes a core computer vision task that goes beyond simply identifying what is in an image. The key action is pinpointing the spatial location of a specific object by drawing a box around it. This task is distinct from classifying the entire image or reading text within it.
Correct Option:
object detection
Object detection is the correct answer. This technology is designed to identify and locate multiple instances of objects within an image. It does this by drawing bounding boxes around each detected object (e.g., cars, people, animals) and providing the coordinates of those boxes. The primary output is the "where" (location via bounding box) in addition to the "what" (object label).
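The distinguishing output of object detection is the bounding box. A sketch of the kind of result it returns, versus image classification's single label; the labels, confidences, and normalized coordinates below are made-up sample values, not output from a real service.

```python
from dataclasses import dataclass

@dataclass
class Detection:
    """One detected object: what it is, plus where it is in the image."""
    label: str
    confidence: float
    left: float    # bounding box in normalized [0, 1] image coordinates
    top: float
    width: float
    height: float

# Hypothetical detections for a street scene.
detections = [
    Detection("car", 0.94, left=0.10, top=0.55, width=0.30, height=0.25),
    Detection("person", 0.88, left=0.62, top=0.40, width=0.12, height=0.45),
]

# Unlike classification, each result answers "where", not just "what":
for d in detections:
    print(f"{d.label}: box at ({d.left}, {d.top}), size {d.width}x{d.height}")
```

Image classification would instead return a single label such as "city street" for the whole image, with no coordinates at all.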
Incorrect Option:
optical character recognition (OCR)
OCR is used to detect and read text within images. It is not used for identifying and locating general objects like vehicles. Its purpose is to convert textual elements into machine-encoded text.
image classification
Image classification assigns a single label to an entire image (e.g., "city street," "highway"). It does not provide any information about the location, number, or specific position of objects within that image. It answers "what is in this picture?" but not "where are the objects?"
image analysis
Image analysis is a very broad term that can encompass many techniques, including object detection and image classification. However, it is not the specific name for the task of returning a bounding box. "Object detection" is the precise term for that specific capability.
Reference:
What is computer vision? - Object Detection
For each of the following statements, select Yes if the statement is true. Otherwise, select
No. NOTE: Each correct selection is worth one point.
Summary:
This question tests your understanding of the capabilities and requirements of the Azure Custom Vision service. It's crucial to know that Custom Vision is designed for building custom image analysis models (both classification and object detection) using your own data, and that it is intended for still images, not video.
Correct Option:
The Custom Vision service can be used to detect objects in an image.
Answer: Yes
Explanation:
This is true. Azure Custom Vision supports two types of projects: Image Classification (tagging images) and Object Detection. The Object Detection feature is specifically designed to identify and locate multiple objects within a single image by drawing bounding boxes around them.
The Custom Vision service requires that you provide your own data to train the model.
Answer: Yes
Explanation:
This is a fundamental characteristic of the service. Unlike pre-built vision services, Custom Vision is a platform for creating custom models tailored to your specific needs. This requires you to provide your own set of labeled images to train the model to recognize your unique objects or tags.
The Custom Vision service can be used to analyze video files.
Answer: No
Explanation:
This is false. Custom Vision is designed to analyze still images. To analyze video, you would need to use a different service, such as Azure AI Video Indexer, which can process video frames and may even integrate with Custom Vision for specific frame analysis, but Custom Vision itself does not natively ingest video files.
Incorrect Option:
The Custom Vision service can be used to detect objects in an image.
Selecting "No" would be incorrect, as object detection is a core, documented feature of the service.
The Custom Vision service requires that you provide your own data to train the model.
Selecting "No" would be incorrect, as the entire purpose of the service is to build custom models from user-provided data.
The Custom Vision service can be used to analyze video files.
Selecting "Yes" would be incorrect, as the service is built for image analysis, not direct video processing.
Reference:
What is Custom Vision? - Azure AI services
Object detection in Azure Custom Vision
During the process of Machine Learning, when should you review evaluation metrics?
A. After you clean the data.
B. Before you train a model.
C. Before you choose the type of model.
D. After you test a model on the validation data.
Summary:
Evaluation metrics are quantitative measures used to assess the performance and accuracy of a trained machine learning model. They are essential for understanding how well the model generalizes to new, unseen data and for comparing the performance of different models or tuning attempts.
Correct Option:
D. After you test a model on the validation data.
This is the correct and primary time to review evaluation metrics. After a model is trained, it must be evaluated on a separate dataset that it did not see during training (the validation or test set). Metrics like accuracy, precision, recall, and F1-score are calculated based on the model's predictions on this held-out data. This process tells you how well the model is likely to perform in the real world.
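The metrics named above are all derived from the model's predictions on held-out data, which is why they can only be reviewed after that test. A self-contained sketch with illustrative labels and predictions:

```python
# Validation labels vs. the trained model's predictions on that data
# (both illustrative).
y_true = [1, 0, 1, 1, 0, 1, 0, 0]
y_pred = [1, 0, 1, 0, 0, 1, 1, 0]

# Confusion-matrix counts.
tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)

accuracy  = (tp + tn) / len(y_true)
precision = tp / (tp + fp)
recall    = tp / (tp + fn)
f1 = 2 * precision * recall / (precision + recall)

print(f"accuracy={accuracy:.2f} precision={precision:.2f} "
      f"recall={recall:.2f} f1={f1:.2f}")
# accuracy=0.75 precision=0.75 recall=0.75 f1=0.75
```

None of these quantities can be computed at the data-cleaning or model-selection stages, since there are no predictions yet to compare against the labels.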
Incorrect Option:
A. After you clean the data.
Data cleaning is a preparatory step focused on handling missing values, correcting errors, and standardizing formats. Evaluation metrics require a trained model to make predictions, which does not exist at this stage. You cannot measure model performance before a model exists.
B. Before you train a model.
This is too early in the process. A model must be trained before its performance can be measured. Before training, you are preparing data and defining the problem, not evaluating a model's output.
C. Before you choose the type of model.
The initial choice of model type (e.g., decision tree, logistic regression) is often based on the problem type, data size, and other heuristics. While evaluation metrics from simple baseline models can inform later model selection, the formal and critical review of metrics happens after a candidate model has been trained and tested.
Reference:
Monitor and view run metrics in Azure Machine Learning
AI-900 Practice Test