Topic 3: Misc. Questions
You have an Azure Cognitive Services model named Model1 that identifies the intent of text input.
You develop an app in C# named App1.
You need to configure App1 to use Model1.
Which package should you add to App1?
A. Azure.AI.Language.Conversations
B. SpeechServicesToolkit
C. Universal.Microsoft.CognitiveServices.Speech
D. Xamarin.Cognitive.Speech
Correct Option (based on actual Azure SDK):
A. Azure.AI.Language.Conversations
This package provides the client library for Azure AI Language conversational language understanding (CLU), which identifies intents and extracts entities from text input. It is the correct package for intent recognition models.
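Although App1 is written in C#, the shape of a conversational language understanding result is easiest to illustrate with a short, language-neutral sketch. The sample payload below is made up; the real prediction comes back from the Azure.AI.Language.Conversations client (or the analyze-conversations REST API), but the top-intent selection logic is the same.

```python
# Hypothetical sample mirroring the "prediction" section of a CLU
# response; the intents and scores are invented for illustration.
sample_prediction = {
    "topIntent": "BookFlight",
    "intents": [
        {"category": "BookFlight", "confidenceScore": 0.92},
        {"category": "CancelFlight", "confidenceScore": 0.05},
        {"category": "None", "confidenceScore": 0.03},
    ],
}

def top_intent(prediction: dict) -> str:
    """Return the intent category with the highest confidence score."""
    best = max(prediction["intents"], key=lambda i: i["confidenceScore"])
    return best["category"]

print(top_intent(sample_prediction))  # BookFlight
```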
Why Other Options Are Incorrect:
B. SpeechServicesToolkit –
Not a standard Azure SDK package. Speech-related functionality is in Microsoft.CognitiveServices.Speech.
C. Universal.Microsoft.CognitiveServices.Speech –
This is not a valid package name. The correct Speech SDK package is Microsoft.CognitiveServices.Speech.
D. Xamarin.Cognitive.Speech –
This is a legacy or third-party package for Xamarin mobile apps. Not the standard Azure SDK for intent recognition.
Reference:
Microsoft Learn: "Azure AI Language – Conversational Language Understanding" – Use Azure.AI.Language.Conversations NuGet package.
You have an Azure subscription.
You need to build an app that will compare documents for semantic similarity. The solution must meet the following requirements:
• Return numeric vectors that represent the tokens of each document.
• Minimize development effort.
Which Azure OpenAI model should you use?
A. GPT-3.5
B. embeddings
C. DALL-E
D. GPT-4
Explanation:
To compare documents for semantic similarity, you need vector embeddings of the text. The embeddings model (text-embedding-ada-002) converts text into numeric vectors that capture semantic meaning. You can then compute cosine similarity between vectors to measure document similarity. This minimizes development effort as the model is pre-trained and requires no fine-tuning.
Correct Option:
B. embeddings
The Azure OpenAI embeddings model (text-embedding-ada-002) returns numeric vectors (arrays of floating-point numbers) representing the input text. These vectors capture semantic meaning, allowing you to compute similarity between documents using cosine similarity or dot product. This is the correct model for this task.
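The similarity computation mentioned above is straightforward once you have the vectors. A minimal sketch, using toy three-dimensional vectors in place of the real 1,536-dimensional text-embedding-ada-002 output:

```python
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Toy vectors standing in for real embeddings returned by the model.
doc1 = [0.1, 0.3, 0.5]
doc2 = [0.1, 0.29, 0.51]   # nearly the same direction -> similarity near 1
doc3 = [0.9, -0.2, 0.05]   # different direction -> lower similarity

print(round(cosine_similarity(doc1, doc2), 3))
```

Documents whose vectors point in nearly the same direction score close to 1; unrelated documents score lower.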
Incorrect Options:
A. GPT-3.5 –
GPT-3.5 is a generative model (text completion/chat). It does not output numeric vectors for semantic similarity. While you could prompt it to compare documents, that would be slower, more expensive, and less accurate than using embeddings.
C. DALL-E –
DALL-E is for generating images from text descriptions. It does not process text documents or output numeric vectors for similarity comparison.
D. GPT-4 –
Similar to GPT-3.5, GPT-4 is a generative model. It is not designed to output embeddings vectors. Using it for similarity would be inefficient and not the minimal effort solution.
Reference:
Microsoft Learn: "Azure OpenAI embeddings" – Use text-embedding-ada-002 to generate vectors for semantic similarity.
You are building an agent that will retrieve the current time at a given location by using a custom API.
You need to test the functionality of the custom API.
How should you complete the command? To answer, select the appropriate options in the answer area.
NOTE: Each correct selection is worth one point.

Explanation:
The request is creating an agent (or assistant) with a function tool definition. The agent will use the get_current_time function to answer location-based time queries. The correct path is /assistants (creating an assistant), and the function definition should be placed under the tools array, with type: "function".
Correct Options:
First blank (after the base URL): assistants
The endpoint /assistants is used to create or manage assistants (agents) in Azure AI Agent Service. The request body defines the assistant's instructions, model, and tools (functions). /completions is for direct chat completions without persistent assistants; /embeddings is for vector embeddings; /threads is for conversation threads (but creating an assistant first is required).
Second blank (in the JSON body, after tools=[): {
The function definition is an object with properties type and function. The structure should be tools: [ { "type": "function", "function": { ... } } ]. The { opens the first tool definition object. The content, functions, and tool_resources arrays are not correct in this context.
Why Other Options Are Incorrect:
First blank alternatives:
completions – Direct completion endpoint; does not support creating assistants with persistent tools/functions.
embeddings – For generating vector embeddings, not assistants.
threads – For creating conversation threads, but an assistant must exist first; this call creates the assistant.
Second blank alternatives:
content – Not used for tool definition; content is for messages.
functions – Legacy approach; current API uses tools with type: "function".
tool_resources – For resources like code interpreter or file search, not for custom function definitions.
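Putting the two answers together, the request body can be sketched as a plain data structure. The parameter schema for get_current_time below is illustrative (the question does not show it), but the outer shape — a tools array whose first element is an object with "type": "function" — is the structure the blanks describe.

```python
import json

# Hedged sketch of the request body for POST .../assistants; the
# get_current_time parameter schema is an assumption for illustration.
tools = [
    {
        "type": "function",
        "function": {
            "name": "get_current_time",
            "description": "Get the current time at a given location.",
            "parameters": {
                "type": "object",
                "properties": {
                    "location": {
                        "type": "string",
                        "description": "The city to look up, e.g. Seattle",
                    }
                },
                "required": ["location"],
            },
        },
    }
]

body = {"model": "gpt-4", "instructions": "...", "tools": tools}
print(json.dumps(body, indent=2))
```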
Reference:
Microsoft Learn: "Azure AI Agent Service – Create assistant" – POST /assistants with tools array containing function definitions.
You have an Azure subscription that contains an Azure AI Language resource named Resource1.
You run the following cURL command, and then play the Output.mp3 file.
For each of the following statements, select Yes if the statement is true. Otherwise, select No.
NOTE: Each correct selection is worth one point.

Explanation:
The cURL command sends an SSML (Speech Synthesis Markup Language) request to the Text-to-Speech API. The SSML defines three voice elements: JennyNeural (US female), RyanNeural (UK male), and ChristopherNeural (US male with "advertisement_upbeat" style). You will hear three distinct sentences, each in a different voice. Accents differ (US vs UK). The third sentence is not neutral; it uses an upbeat style.
Correct Answers:
Statement 1: You hear three sentences in different voices.
Yes – The SSML uses three different voice names: en-US-JennyNeural (female), en-GB-RyanNeural (male), and en-US-ChristopherNeural (male). Each voice is distinct, so you will hear three sentences in three different voices.
Statement 2: You hear three sentences in different accents.
Yes – JennyNeural and ChristopherNeural speak with US English accents (en-US). RyanNeural speaks with a UK English accent (en-GB). Thus, you hear both US and UK accents, so accents are different across the sentences.
Statement 3: You hear three sentences expressed in a neutral tone.
No – The third sentence uses the advertisement_upbeat speaking style for ChristopherNeural, so it is delivered in an upbeat tone, not a neutral one. Only the first two sentences use the voices' default (neutral) styles.
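The structure described in the three statements can be sketched by assembling the SSML document programmatically. The sentence texts below are placeholders; the voice names and the advertisement_upbeat style come from the question.

```python
# Hedged sketch: assembling a multi-voice SSML document like the one
# sent in the cURL command. Sentence texts are placeholders.
voices = [
    ("en-US-JennyNeural", None, "First sentence."),
    ("en-GB-RyanNeural", None, "Second sentence."),
    ("en-US-ChristopherNeural", "advertisement_upbeat", "Third sentence!"),
]

parts = ['<speak version="1.0" xmlns="http://www.w3.org/2001/10/synthesis" '
         'xmlns:mstts="https://www.w3.org/2001/mstts" xml:lang="en-US">']
for name, style, text in voices:
    if style:
        body = f'<mstts:express-as style="{style}">{text}</mstts:express-as>'
    else:
        body = text
    parts.append(f'<voice name="{name}">{body}</voice>')
parts.append("</speak>")
ssml = "".join(parts)
print(ssml)
```

Three voice elements produce three voices (statement 1: Yes), the en-GB voice introduces a different accent (statement 2: Yes), and the express-as style makes the third sentence non-neutral (statement 3: No).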
Reference:
Microsoft Learn: "Text-to-Speech SSML" – Supports multiple voices, accents, and expressive styles.
You are building an app that will automatically translate speech from English to French, German, and Spanish by using Azure AI services.
You need to define the output languages and configure the Azure AI Speech service.
How should you complete the code? To answer, select the appropriate options in the answer area.
NOTE: Each correct selection is worth one point.
Explanation:
To translate speech from English to multiple target languages, you set the source language using SpeechRecognitionLanguage (not shown but implied) and add target languages using AddTargetLanguage. The target languages should be specified using language codes ("fr", "de", "es"), not display names. The recognizer should be TranslationRecognizer.
Correct Options:
For the languages list (first blank): ["fr", "de", "es"]
The AddTargetLanguage method expects language codes (e.g., "fr" for French, "de" for German, "es" for Spanish). This list correctly specifies the three target languages.
For the recognizer (second blank): TranslationRecognizer
TranslationRecognizer is the correct class for speech translation. It takes a SpeechTranslationConfig and an AudioConfig, and returns translated text in the target languages.
Why Other Options Are Incorrect:
Languages list alternatives:
["en-GB"] – This is English (source language), not target languages.
["en", "fr", "de", "es"] – Includes English as a target language, which is unnecessary.
["French", "German", "Spanish"] – Uses display names instead of language codes; this will cause errors.
Recognizer alternatives:
IntentRecognizer – Used for LUIS intent recognition, not translation.
SpeakerRecognizer – Used for speaker verification and identification, not for translation.
SpeechSynthesizer – Used for text-to-speech, not translation.
Reference:
Microsoft Learn: "Speech Translation – AddTargetLanguage" – Accepts language codes like "fr", "de", "es".
You have an Azure subscription.
You need to deploy an Azure AI Search resource that will recognize geographic locations.
Which built-in skill should you include in the skillset for the resource?
A. AzureOpenAIEmbeddingSkill
B. DocumentExtractionSkill
C. EntityLinkingSkill
D. EntityRecognitionSkill
Explanation:
To recognize geographic locations (e.g., cities, countries, states) in text, you need a built-in skill for entity recognition. The EntityRecognitionSkill in Azure Cognitive Search can identify categories like Location, Organization, Person, Quantity, etc. For geographic locations specifically, this skill extracts location entities from text.
Correct Option:
D. EntityRecognitionSkill
The EntityRecognitionSkill (part of Cognitive Search skillset) uses the Text Analytics API to recognize entities including Location, Person, Organization, DateTime, Quantity, etc. It identifies geographic locations such as "Seattle", "France", or "Mount Everest" in your indexed content.
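As a rough sketch of what this looks like in a skillset, the definition below restricts the skill to Location entities. The field names follow the Azure AI Search REST API as documented for the V3 skill, but verify them against the current API version before use.

```python
import json

# Hedged sketch of an EntityRecognitionSkill (V3) definition in a
# skillset, restricted to Location entities. Field names should be
# checked against the current Azure AI Search API version.
skill = {
    "@odata.type": "#Microsoft.Skills.Text.V3.EntityRecognitionSkill",
    "categories": ["Location"],
    "defaultLanguageCode": "en",
    "inputs": [{"name": "text", "source": "/document/content"}],
    "outputs": [{"name": "locations", "targetName": "locations"}],
}
print(json.dumps(skill, indent=2))
```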
Incorrect Options:
A. AzureOpenAIEmbeddingSkill –
This skill generates vector embeddings for text using Azure OpenAI models. It does not directly recognize geographic locations; it creates embeddings for similarity search.
B. DocumentExtractionSkill –
This skill extracts content from files (e.g., PDF, Word, PowerPoint) as part of the indexing pipeline. It does not perform entity recognition.
C. EntityLinkingSkill –
This skill links recognized entities to a knowledge base (Wikipedia). While it can recognize locations, it requires the EntityRecognitionSkill as a prerequisite. The question asks for the skill that recognizes geographic locations; EntityLinkingSkill links them, but the core recognition is done by EntityRecognitionSkill.
Reference:
Microsoft Learn: "EntityRecognitionSkill in Azure Cognitive Search" – Recognizes entities including Location, Person, Organization.
You have an Azure subscription that contains an Azure OpenAI resource named AI1.
You build a chatbot that uses AI1 to provide generative answers to specific questions.
You need to ensure that the chatbot checks all input and output for objectionable content.
Which type of resource should you create first?
A. Azure Machine Learning
B. Log Analytics
C. Azure AI Content Safety
D. Microsoft Defender Threat Intelligence (Defender TI)
Explanation:
To check input and output for objectionable content (hate speech, sexual content, violence, self-harm), you need a dedicated content moderation service. Azure AI Content Safety provides text and image moderation APIs that can analyze prompts and responses before they are shown to users. This should be created first and integrated into the chatbot flow.
Correct Option:
C. Azure AI Content Safety
Azure AI Content Safety is specifically designed to detect objectionable content across four severity categories (hate, sexual, violence, self-harm). It can be used to moderate both user inputs (prompts) and model outputs (responses) from Azure OpenAI, ensuring responsible AI usage.
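The moderation pattern itself is simple: analyze the text, then block it if any category's severity meets a threshold. The payload below is made up; the real scores come from the text-analysis operation of Azure AI Content Safety, which returns a severity per category.

```python
# Illustrative sketch of blocking logic over a Content Safety style
# analysis result. The severities below are invented sample data.
analysis = {
    "hate": 0,
    "sexual": 0,
    "violence": 4,
    "selfHarm": 0,
}

def is_blocked(result: dict, threshold: int = 2) -> bool:
    """Block the message if any category meets or exceeds the threshold."""
    return any(severity >= threshold for severity in result.values())

print(is_blocked(analysis))  # True: violence severity 4 exceeds threshold 2
```

In a chatbot flow, this check would run twice: once on the user prompt before it reaches AI1, and once on the model's response before it reaches the user.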
Incorrect Options:
A. Azure Machine Learning –
AML is for building, training, and deploying custom ML models. It does not provide pre-built content moderation. Using AML would require building a custom solution, increasing effort.
B. Log Analytics –
Log Analytics is for collecting and querying log data (monitoring, diagnostics). It does not perform content moderation on live request/response traffic.
D. Microsoft Defender Threat Intelligence (Defender TI) –
Defender TI is for threat intelligence (malware, phishing, threat actors). It is not designed for detecting objectionable content in chatbot inputs/outputs.
Reference:
Microsoft Learn: "Azure AI Content Safety" – Use for moderating text and images in generative AI applications.
You have a chatbot.
You need to test the bot by using the Bot Framework Emulator. The solution must ensure that you are prompted for credentials when you sign in to the bot.
Which three settings should you configure? To answer, select the appropriate settings in the answer area.
NOTE: Each correct selection is worth one point.

Explanation:
To be prompted for credentials when you sign in to a bot, the Bot Framework Emulator must connect to a remotely hosted bot (for example, one deployed to Azure) that uses authentication. The OAuth sign-in flow requires a publicly reachable endpoint, which the emulator provides by tunneling through ngrok. The three relevant settings in the emulator's settings panel therefore all concern ngrok.
Correct Options (three settings):
Path to ngrok – Set the path to the ngrok executable. ngrok is required to tunnel to a bot hosted remotely (e.g., Azure App Service) that uses authentication.
Bypass ngrok for local addresses – Uncheck this option (or ensure it is not bypassed) so that ngrok is used for remote connections, enabling proper authentication flow.
Run ngrok when the Emulator starts up – Enable this so ngrok automatically starts, ensuring the tunnel is ready for remote bot connections that require credential prompts.
Reference:
Microsoft Learn: "Bot Framework Emulator – Connecting to a remote bot" – Use ngrok to tunnel to Azure-hosted bots with authentication.
You are processing text by using the Azure AI Language service.
You need to identify music band names in the text. The solution must minimize development effort.
What should you use?
A. Key phrase extraction
B. Conversational Language Understanding (CLU)
C. Entity linking
D. Custom named entity recognition (NER)
Explanation:
Music band names are well-known entities that can be linked to Wikipedia. Entity linking in Azure AI Language identifies named entities and links them to a knowledge base (Wikipedia). For common band names (e.g., "The Beatles", "Queen"), this works out-of-the-box without training, minimizing development effort.
Correct Option:
C. Entity linking
Entity linking disambiguates and links recognized entities to Wikipedia articles. Music band names are part of the pre-built entity catalog. The API returns a unique Wikipedia URL for each recognized band name, requiring no custom training.
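The distinguishing output of entity linking is the knowledge-base URL attached to each entity. The sample below is invented data mirroring the shape of a linked-entities result; real results come from the Azure AI Language entity-linking endpoint.

```python
# Hypothetical sample mirroring an entity-linking result: each entity
# carries a Wikipedia URL for disambiguation. The data is made up.
entities = [
    {
        "name": "The Beatles",
        "url": "https://en.wikipedia.org/wiki/The_Beatles",
        "dataSource": "Wikipedia",
    },
    {
        "name": "Queen (band)",
        "url": "https://en.wikipedia.org/wiki/Queen_(band)",
        "dataSource": "Wikipedia",
    },
]

band_links = {e["name"]: e["url"] for e in entities}
print(band_links["The Beatles"])
```

Note how "Queen (band)" is disambiguated from the monarch — exactly the behavior that makes entity linking a better fit here than plain key phrase extraction.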
Incorrect Options:
A. Key phrase extraction –
Extracts important topics and concepts but does not specifically identify band names or link them to a knowledge base. A band name might appear as a key phrase, but without disambiguation or linking.
B. Conversational Language Understanding (CLU) –
CLU requires custom training with intents and entities. It is designed for conversational agents, not for generic band name recognition. This increases development effort.
D. Custom named entity recognition (NER) –
Custom NER requires labeling training examples of band names. This is high effort compared to using pre-built entity linking.
Reference:
Microsoft Learn: "Entity linking in Azure AI Language" – Links entities to Wikipedia, including music bands, people, places, and organizations.
You have an Azure subscription that contains an Azure OpenAI resource.
You deploy the GPT-4 model to the resource.
You need to ensure that you can upload files that will be used as grounding data for the model.
Which two types of resources should you create? Each correct answer presents part of the solution.
NOTE: Each correct selection is worth one point.
A. Azure AI Bot Service
B. Azure SQL
C. Azure Al Document Intelligence
D. Azure Blob Storage
E. Azure AI Search
Explanation:
To use grounding data (your own documents) with Azure OpenAI's "Add your data" feature, you need two resources: Azure Blob Storage to store the files (documents, PDFs, text files), and Azure AI Search (Cognitive Search) to index the content and enable retrieval-augmented generation (RAG). The search service handles vector and keyword search over your data.
Correct Options:
D. Azure Blob Storage
Blob Storage is used to store the actual document files (PDFs, Word docs, text files, etc.) that will serve as grounding data. The data is ingested from Blob Storage into the search index.
E. Azure AI Search
Azure Cognitive Search (AI Search) indexes the content from Blob Storage, enabling efficient retrieval of relevant document chunks when querying the GPT-4 model. This is required for the "Add your data" feature in Azure OpenAI Studio.
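Once both resources exist, the chat request points GPT-4 at the search index through a data_sources section. The field names below follow the "on your data" REST contract as of recent API versions but should be verified against the current documentation; the endpoint, index name, and key are placeholders.

```python
import json

# Hedged sketch: the data_sources section of an Azure OpenAI chat
# completions request that grounds GPT-4 on an Azure AI Search index.
# Endpoint, index name, and key are placeholders.
data_sources = [
    {
        "type": "azure_search",
        "parameters": {
            "endpoint": "https://<search-service>.search.windows.net",
            "index_name": "<index-name>",
            "authentication": {"type": "api_key", "key": "<search-api-key>"},
        },
    }
]

request_body = {
    "messages": [{"role": "user", "content": "Summarize the uploaded documents."}],
    "data_sources": data_sources,
}
print(json.dumps(request_body, indent=2))
```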
Incorrect Options:
A. Azure AI Bot Service –
Bot Service is for building and deploying chatbots, not for storing or indexing grounding data for Azure OpenAI.
B. Azure SQL –
While you could use SQL as a data source, the standard "Add your data" integration in Azure OpenAI Studio works with Blob Storage + Cognitive Search. SQL is not a direct option without custom code.
C. Azure AI Document Intelligence –
Document Intelligence extracts structured data from forms and documents. It is not required for grounding data, though it could be used as a preprocessing step.
Reference:
Microsoft Learn: "Azure OpenAI – Add your data" – Requires Azure Blob Storage (data source) and Azure Cognitive Search (index).