Topic 3: Misc. Questions
You have an Azure subscription that contains an Azure OpenAI resource. Multiple models are deployed to the resource.
You are building a chatbot by using the Chat playground in Azure AI Studio.
You need to ensure that the chatbot generates text in concise formal business language.
The solution must meet the following requirements:
• Reduce the cost of running the language model.
• Maintain the size of the chatbot history window.
Which two settings should you configure? To answer, select the appropriate settings in the answer area. NOTE: Each correct selection is worth one point.

Explanation:
To generate concise formal business language, you modify the system message to instruct the model accordingly. To reduce cost and maintain history window size, you reduce Max response tokens (limits output length) and adjust Temperature to a lower value (for more focused, less creative responses). Lower token usage reduces cost.
Correct Options (from typical Chat playground settings):
1. System message – Change "You are an AI assistant that helps people find information" to something like: "You are a business assistant that provides concise, formal responses in a professional tone. Keep answers brief and to the point."
2. Max response tokens – Reduce this value (e.g., from high default to 150-300 tokens). This limits the length of each response, reducing token usage (cost) and keeping output concise.
3. Temperature – Set to a lower value (e.g., 0.2-0.5). Lower temperature makes output more deterministic, focused, and less creative/rambling, which aligns with formal business language.
Why These Meet Requirements:
Concise formal language – Achieved via system message instruction and low temperature.
Reduce cost – Achieved by reducing max response tokens (fewer tokens generated = lower cost).
Maintain history window size – Reducing output tokens leaves more space in the context window for conversation history (input tokens are not reduced, but output tokens are constrained).
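As an illustration, the settings above map directly onto a Chat Completions request body. The following is a hedged Python sketch: the field names follow the Azure OpenAI REST API, while the message contents and the specific values (200 tokens, temperature 0.3) are examples only, not the exam's exact figures.

```python
# Sketch of a Chat Completions request body after applying the settings
# discussed above. Values and message text are illustrative.
request_body = {
    "messages": [
        {
            "role": "system",
            "content": (
                "You are a business assistant that provides concise, "
                "formal responses in a professional tone. "
                "Keep answers brief and to the point."
            ),
        },
        {"role": "user", "content": "Summarize our Q3 results."},
    ],
    "max_tokens": 200,   # caps output length -> fewer billed tokens per reply
    "temperature": 0.3,  # more deterministic, less rambling output
}
```

A shorter `max_tokens` also leaves more of the model's context window free for conversation history, which is how the second requirement is met.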
Reference:
Microsoft Learn: "Azure OpenAI – System messages" – Guide model behavior and tone.
You need to measure the public perception of your brand on social media messages. Which Azure Cognitive Services service should you use?
A. Text Analytics
B. Content Moderator
C. Computer Vision
D. Form Recognizer
Explanation:
To measure public perception (positive, negative, neutral) of your brand from social media messages, you need sentiment analysis. Azure AI Language (formerly Text Analytics) provides sentiment analysis that returns confidence scores for positive, negative, neutral, and mixed sentiments. This is the correct service for this task.
Correct Option:
A. Text Analytics
Text Analytics (now part of Azure AI Language) includes sentiment analysis, which evaluates text and returns sentiment labels and confidence scores. It is specifically designed to measure opinions, attitudes, and emotions expressed in text, making it ideal for brand perception analysis from social media messages.
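As a hedged sketch, the request and (abbreviated) response shapes for the sentiment operation look roughly like this. Field names follow the Azure AI Language REST API; the document text and scores are invented sample data.

```python
# Illustrative request body for the Azure AI Language sentiment operation.
request = {
    "kind": "SentimentAnalysis",
    "analysisInput": {
        "documents": [
            {"id": "1", "language": "en", "text": "Love this brand!"},
        ]
    },
}

# A typical per-document response carries a label plus confidence scores,
# which is exactly what brand-perception measurement needs.
sample_response_doc = {
    "id": "1",
    "sentiment": "positive",
    "confidenceScores": {"positive": 0.98, "neutral": 0.01, "negative": 0.01},
}
```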
Incorrect Options:
B. Content Moderator – Detects profanity, offensive content, and adult/racy content. It does not measure sentiment or public perception (positive/negative).
C. Computer Vision – Analyzes images for tags, objects, faces, and adult content. It cannot process text sentiment from social media messages.
D. Form Recognizer – Extracts structured data from forms and documents (invoices, receipts). Not relevant for sentiment analysis.
Reference:
Microsoft Learn: "Text Analytics – Sentiment Analysis" – Determines positive, negative, neutral, and mixed sentiment in text.
You are building a language learning solution.
You need to recommend which Azure services can be used to perform the following tasks:
• Analyze lesson plans submitted by teachers and extract key fields, such as lesson times and required texts.
• Analyze learning content and provide students with pictures that represent commonly used words or phrases in the text.
The solution must minimize development effort.
Which Azure service should you recommend for each task? To answer, select the appropriate options in the answer area.
NOTE: Each correct selection is worth one point.

Explanation:
For extracting structured fields (lesson times, required texts) from lesson plans, Azure AI Document Intelligence provides prebuilt and custom document models. For pairing commonly used words or phrases with representative pictures, Azure AI Custom Vision lets you train an image model with minimal custom code. The other listed services are a poorer fit: Immersive Reader improves readability rather than associating pictures with text, and Azure AI Search indexes content rather than extracting fields.
Lesson plans: Azure AI Document Intelligence (extracts key fields)
Learning content: Azure AI Custom Vision (custom image-word associations)
Correct Options:
First task (Analyze lesson plans): Azure AI Document Intelligence
Document Intelligence extracts key-value pairs, tables, and structured fields from documents. For lesson plans, you can train a custom model or use pre-built layout to extract lesson times and required texts. This minimizes development effort compared to building custom OCR+parsing.
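As a hedged illustration, analyzing a lesson plan is a single REST call: you pick a model ID and point it at the document. The model ID below is the real prebuilt layout model name, but the document URL is hypothetical.

```python
# Sketch of a Document Intelligence analyze call's inputs.
# "prebuilt-layout" extracts text, tables, and structure without training;
# a custom model ID would go here instead for labeled fields like
# "lesson time" or "required text".
model_id = "prebuilt-layout"
analyze_request = {"urlSource": "https://example.com/lesson-plan.pdf"}  # hypothetical URL
```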
Second task (Provide pictures for words/phrases): Azure AI Custom Vision
Custom Vision allows you to train a classifier that maps words to images (e.g., "dog" → picture of a dog). After training, the app can query the model to retrieve the appropriate image for a given word or phrase. This minimizes effort compared to building a custom image retrieval system.
Why Other Options Are Incorrect:
First task alternatives:
Azure AI Search – Indexes and searches content but does not extract structured fields from documents.
Azure AI Custom Vision – For image classification, not document field extraction.
Immersive Reader – Improves text readability, does not extract fields.
Second task alternatives:
Azure AI Search – Can store and retrieve images but does not automatically associate words with pictures.
Azure AI Document Intelligence – For document extraction, not image-word association.
Immersive Reader – Provides text-to-speech and translation, not picture representation.
Reference:
Microsoft Learn: "Document Intelligence – Custom models" – Extract key fields from documents.
Microsoft Learn: "Custom Vision – Image classification" – Train models to associate images with labels (words/phrases).
You have an Azure subscription that contains an Azure OpenAI resource named AI1 and an Azure AI Content Safety resource named CS1.
You build a chatbot that uses AI1 to provide generative answers to specific questions and CS1 to check input and output for objectionable content.
You need to optimize the content filter configurations by running tests on sample questions.
Solution: From Content Safety Studio, you use the Moderate text content feature to run the tests.
Does this meet the requirement?
A. Yes
B. No
Explanation:
Content Safety Studio's Moderate text content feature allows you to input sample text, test content moderation settings, and see results (hate, sexual, violence, self-harm categories with severity levels). This is the correct tool for running tests on sample questions to optimize content filter configurations before deploying to production.
Correct Option:
A. Yes
The "Moderate text content" feature in Content Safety Studio provides an interactive testing environment. You can input sample questions, see the moderation results, and adjust thresholds or blocklists. This allows you to optimize content filter configurations based on test outcomes, meeting the requirement.
Why This Is Correct:
Moderate text content – Designed for testing text against content safety categories.
Sample questions – You can paste any sample text and immediately see the analysis.
Optimize configurations – Test different thresholds, blocklists, and categories iteratively.
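The Moderate text content page is an interactive front end for the same Analyze Text operation the chatbot calls through CS1. As a hedged sketch, a test request body looks roughly like this (field names per the Content Safety REST API; the sample text is illustrative):

```python
# Illustrative Analyze Text request body for Azure AI Content Safety.
# The same categories and severity output are what Content Safety Studio
# shows interactively when you paste a sample question.
request_body = {
    "text": "Sample user question to test against the content filters.",
    "categories": ["Hate", "Sexual", "Violence", "SelfHarm"],
    "outputType": "FourSeverityLevels",  # severity 0/2/4/6 per category
}
```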
Reference:
Microsoft Learn: "Content Safety Studio – Moderate text content" – Test text moderation interactively.
You are building an app that uses a Language Understanding model to analyze text files.
You need to ensure that the app can detect the following entities:
• Temperatures
• Currency values
• Email addresses
• Telephone numbers
The solution must minimize development effort.
Which model capability should you use?
A. list entities
B. learned entities
C. utterances
D. regular expression components
E. pre-built entity components
Explanation:
Temperatures, currency values, email addresses, and telephone numbers are common data types that follow predictable patterns. Azure AI Language provides pre-built entity components that recognize these entities out-of-the-box without training. Using pre-built entities minimizes development effort compared to custom regex or learned entities.
Correct Option:
E. pre-built entity components
Pre-built entities (e.g., Temperature, Currency, Email, PhoneNumber) are ready-to-use recognizers that detect common data types. You simply enable them in your Language Understanding model; no training or labeling is required. This minimizes development effort significantly.
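Conceptually, each entity in the project simply attaches a prebuilt component instead of a list, regex, or learned component. The sketch below is illustrative only: the field and category names loosely echo the conversational language understanding project format and are assumptions, not the exact schema.

```python
# Illustrative only: entities backed entirely by prebuilt components,
# so no training data or labeling is needed. Names are hypothetical.
entities = [
    {"category": "Reading", "prebuilts": [{"category": "Temperature"}]},
    {"category": "Price", "prebuilts": [{"category": "Currency"}]},
    {"category": "ContactEmail", "prebuilts": [{"category": "Email"}]},
    {"category": "ContactPhone", "prebuilts": [{"category": "PhoneNumber"}]},
]
```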
Incorrect Options:
A. list entities –
List entities require manually defining all possible values (e.g., ["hot", "warm", "cold"]). They are impractical for temperatures, currency values, emails, or phone numbers because these have infinite or pattern-based variations.
B. learned entities –
Learned (machine-learned) entities require labeling examples in training data. This increases development effort and is unnecessary when pre-built entities exist.
C. utterances –
Utterances are example phrases for training intents, not for entity detection. They are not a model capability for entity recognition.
D. regular expression components –
Regex entities can detect patterns (e.g., email regex), but you must write and maintain the regex patterns for each entity type. Pre-built entities are easier and more robust.
Reference:
Microsoft Learn: "Pre-built entities in Language Understanding" – Includes Temperature, Currency, Email, PhoneNumber, etc.
You are building an app that will provide users with definitions of common AI terms.
You create the following C# code.
For each of the following statements, select Yes if the statement is true. Otherwise, select No.
NOTE: Each correct selection is worth one point.

Explanation:
The code uses Azure OpenAI with a system message "You are a helpful assistant." and a user prompt "What is an LLM?". The model will likely return a definition of LLM (Large Language Model). However, there is no "high degree of certainty" guarantee. Changing the prompt to be more specific ("in the context of AI models") helps. Changing the system message to restrict context ("only within AI language models") also increases likelihood of relevant responses.
Correct Answers:
Statement 1: The response will contain an explanation of large language models (LLMs) that has a high degree of certainty.
No – The model will likely provide a definition of LLMs, but there is no guarantee of "high degree of certainty." Generative models can produce varying responses, and certainty is not a measurable output. The statement overstates reliability.
Statement 2: Changing "what is an LLM?" to "what is an LLM in the context of AI models?" will produce the intended response.
Yes – Making the prompt more specific (adding "in the context of AI models") reduces ambiguity and increases the likelihood that the model provides the intended definition within the AI domain. This is a good prompt engineering practice.
Statement 3: Changing "You are a helpful assistant." to "You must answer only within the context of AI language models." will give a higher likelihood of producing the intended response.
Yes – Constraining the system message to limit the response context to "AI language models" focuses the model on the relevant domain, reducing the chance of off-topic or overly general answers. This improves the likelihood of the intended response.
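Putting statements 2 and 3 together, the improved message list would look like the sketch below (the message shape follows the Chat Completions API; the exact strings are the ones quoted above):

```python
# Sketch of the message list after applying both prompt-engineering fixes:
# a constrained system message plus a more specific user prompt.
messages = [
    {
        "role": "system",
        "content": "You must answer only within the context of AI language models.",
    },
    {
        "role": "user",
        "content": "What is an LLM in the context of AI models?",
    },
]
```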
Reference:
Microsoft Learn: "Azure OpenAI – Prompt engineering" – Specific prompts and constrained system messages improve response relevance.
You have an Azure subscription that contains an Azure Al Content Safety resource.
You are building a social media app that will enable users to share images.
You need to configure the app to moderate inappropriate content uploaded by the users.
How should you complete the code? To answer, select the appropriate options in the answer area.
NOTE: Each correct selection is worth one point.

Explanation:
Azure AI Content Safety provides image moderation via the AnalyzeImage method. You need to instantiate a ContentSafetyClient with the endpoint and key, then call client.AnalyzeImage(request) where request is an AnalyzeImageOptions object containing the image data.
Correct Options:
First blank (after new): ContentSafetyClient
The ContentSafetyClient is the main client class for interacting with Azure AI Content Safety. It requires an endpoint URI and an AzureKeyCredential object for authentication.
Second blank (return statement): client.AnalyzeImage(request)
The AnalyzeImage method analyzes an image for objectionable content (hate, sexual, violence, self-harm). It accepts an AnalyzeImageOptions object containing the image (as a stream or URL) and returns an AnalyzeImageResult.
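Under the hood, the SDK call corresponds to an Analyze Image REST request whose body carries the image as base64. As a hedged sketch (the placeholder bytes stand in for a real uploaded image):

```python
# Illustrative Analyze Image request body for Azure AI Content Safety.
# Real code would pass actual image bytes from the user's upload.
import base64

image_bytes = b"<raw image bytes>"  # placeholder, not a real image
request_body = {
    "image": {"content": base64.b64encode(image_bytes).decode("ascii")},
    "categories": ["Hate", "Sexual", "Violence", "SelfHarm"],
}
```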
Why Other Options Are Incorrect:
First blank alternatives:
AnalyzeTextOptions – This is a request options class for text moderation, not for creating a client.
BlocklistClient – A client for managing custom blocklists, not for image moderation.
TextCategoriesAnalysis – This is a result type, not a client class.
Second blank alternatives:
AnalyzeImage(request) – Missing client. prefix; would not reference the client instance.
client.AnalyzeText(request) – For text moderation, not images.
request.AnalyzeImage(client) – Incorrect method invocation; AnalyzeImage is a method of the client, not the request.
Reference:
Microsoft Learn: "Azure AI Content Safety – Image moderation" – Use ContentSafetyClient.AnalyzeImage.
You have an app that uses the Azure AI Language custom question answering service.
You need to add alternatives for the word "testing" by using the Authoring API.
How should you complete the JSON payload? To answer, select the appropriate options in the answer area.
NOTE: Each correct selection is worth one point.

Explanation:
In custom question answering, synonyms (alternate phrasings) are defined within the alterations array. Each alteration entry contains a phrases array listing the alternative words or phrases that should be treated as equivalent (e.g., "testing", "T-rials", "Evaluate"). The value field is not used; the correct structure uses phrases.
Correct Options:
First blank (instead of "value"): phrases
The phrases array holds the list of synonyms/alternatives. For example, "phrases": ["testing", "T-rials", "Evaluate"]. This tells the question answering service that these words are equivalent.
Second blank (array contents): the alternative words themselves
The array assigned to phrases contains the synonym strings. In this example, "T-rials" and "Evaluate" are listed as alternatives for "testing".
Why Other Options Are Incorrect:
"synonyms" – Not the correct property name. The API uses phrases within alterations.
"value" – Not a valid property for the Authoring API's synonym/alteration definition.
The nested structure with "value": [...] is incorrect.
Reference:
Microsoft Learn: "Custom question answering – Authoring API" – Use alterations array with phrases to define synonyms.
You are designing a conversational interface for an app that will be used to make vacation requests. The interface must gather the following data:
• The start date of a vacation
• The end date of a vacation
• The amount of required paid time off
The solution must minimize dialog complexity. Which type of dialog should you use?
A. Skill
B. waterfall
C. adaptive
D. component
Explanation:
A waterfall dialog in Bot Framework executes a sequence of steps in order. It is ideal for gathering multiple pieces of information (start date, end date, time off amount) in a predictable, linear flow. Waterfall dialogs are simpler to implement and understand compared to adaptive dialogs for straightforward sequential prompts.
Correct Option:
B. waterfall
Waterfall dialogs run a series of steps, each waiting for user input before proceeding to the next. This is perfect for collecting three data points in sequence (start date → end date → amount). The logic is simple and minimizes dialog complexity compared to more advanced patterns.
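The waterfall pattern itself is easy to illustrate without the Bot Framework SDK. The stand-alone Python sketch below (function and field names are invented, and user answers are hard-coded in place of real prompts) runs three steps in order, each recording one piece of the vacation request:

```python
# Minimal stand-alone illustration of the waterfall pattern: steps run
# sequentially, each consuming one user answer. Bot Framework's
# WaterfallDialog follows the same step-by-step flow.
def ask_start_date(state, answer):
    state["start_date"] = answer

def ask_end_date(state, answer):
    state["end_date"] = answer

def ask_pto_hours(state, answer):
    state["pto_hours"] = answer

steps = [ask_start_date, ask_end_date, ask_pto_hours]
state = {}
answers = ["2024-07-01", "2024-07-05", 40]  # simulated user replies
for step, answer in zip(steps, answers):
    step(state, answer)
```

Because the flow is a fixed sequence with no branching, nothing more elaborate than this linear step list is needed, which is exactly why waterfall minimizes dialog complexity here.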
Incorrect Options:
A. Skill –
A skill is a bot that can be called by another bot (modular composition). It is not a dialog type for collecting sequential input within a single bot.
C. adaptive –
Adaptive dialogs are more powerful and flexible but also more complex, with event-driven actions and language generation. They are overkill for simple sequential data collection and introduce unnecessary complexity.
D. component –
Component dialogs are containers for grouping sub-dialogs. They do not define the flow logic themselves; they are used to organize other dialogs.
Reference:
Microsoft Learn: "Waterfall dialogs in Bot Framework" – Execute steps sequentially, ideal for gathering multiple inputs.
You develop an app named App1 that performs speech-to-speech translation.
You need to configure App1 to translate English to German.
How should you complete the speechTranslationConfig object? To answer, drag the appropriate values to the correct targets. Each value may be used once, more than once, or not at all. You may need to drag the split bar between panes or scroll to view content.
NOTE: Each correct selection is worth one point.

Explanation:
To configure speech-to-speech translation from English to German, you need to set the source language (what the user speaks) using speechRecognitionLanguage to "en-US", and add the target language (what to translate into) using addTargetLanguage with "de". The speechSynthesisLanguage is optional as it defaults to the target language.
Correct Options:
First blank (after translationConfig.): speechRecognitionLanguage
This property sets the language of the incoming speech audio. For English (US), set it to "en-US". The speech recognizer will listen for English and convert it to text before translation.
Second blank (assigned value): "en-US"
The value assigned to speechRecognitionLanguage is the locale code for English (United States). This matches the source language of the user's speech.
Third blank (after translationConfig.): addTargetLanguage
This method adds a target language for translation. For German, you call addTargetLanguage("de"). This adds German as an output language.
Fourth blank (value for target language): "de"
"de" is the locale code for German. This tells the translation service to translate the recognized English text into German. For speech-to-speech translation, the service will also synthesize German speech output.
Why Other Options Are Incorrect:
speechSynthesisLanguage – Would set the output speech language, but addTargetLanguage is preferred and automatically handles synthesis.
voiceName – Specifies a particular voice for speech synthesis, not the translation target language.
Reference:
Microsoft Learn: "Speech Translation – Configuration" – Set SpeechRecognitionLanguage for source, use AddTargetLanguage for target(s).