Topic 3: Misc. Questions

You are developing a text processing solution.

You develop the following method.



You call the method by using the following code.

GetKeyPhrases(textAnalyticsClient, "the cat sat on the mat");

For each of the following statements, select Yes if the statement is true. Otherwise, select No.

NOTE: Each correct selection is worth one point.




Explanation:
The code uses ExtractKeyPhrases on the sentence "the cat sat on the mat". Key phrase extraction identifies important nouns and noun phrases, not all words. Stop words ("the", "on") and verbs ("sat") are typically excluded. The method returns only key phrase strings, not confidence scores.
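The method itself appears only as a screenshot in the original question (it is C# and writes to the console). A minimal Python sketch of the equivalent logic, assuming the azure-ai-textanalytics v5 SDK; endpoint and key are placeholders:

```python
def get_key_phrases(client, text):
    """Print each key phrase found in `text` and return the list."""
    result = client.extract_key_phrases([text])[0]
    if result.is_error:
        return []
    for phrase in result.key_phrases:
        print(phrase)  # only the phrase string; no confidence score is returned
    return list(result.key_phrases)

# Usage with a real Language resource (placeholders):
#   from azure.ai.textanalytics import TextAnalyticsClient  # pip install azure-ai-textanalytics
#   from azure.core.credentials import AzureKeyCredential
#   client = TextAnalyticsClient("<endpoint>", AzureKeyCredential("<key>"))
#   get_key_phrases(client, "the cat sat on the mat")
```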

Correct Answers:

Statement 1: The call will output key phrases from the input string to the console.
Yes – The method iterates through response.Value (key phrases) and writes each to the console using Console.WriteLine(). Assuming the API call succeeds and returns at least one key phrase, output will be produced.

Statement 2: The output will contain the following words: the, cat, sat, on, and mat.
No – Key phrase extraction does not return stop words ("the", "on") or verbs ("sat"). For this sentence, the likely output is just "cat" and "mat" as separate key phrases. It will not output all five words as individual key phrases.

Statement 3: The output will contain the confidence level for key phrases.
No – The ExtractKeyPhrases method returns a KeyPhraseCollection containing only the key phrase strings. Confidence scores are not provided for key phrase extraction (unlike entity recognition or sentiment analysis). The code writes keyphrase directly, not any confidence value.

Reference:
Microsoft Learn: "Text Analytics – Key Phrase Extraction" – Returns key phrases as strings, no confidence scores. Stop words and common verbs are filtered out.

You have a blog that allows users to append feedback comments. Some of the feedback comments contain harmful content that includes discriminatory language.

You need to create a prototype of a solution that will detect the harmful content. The solution must minimize development effort.

Which two actions should you perform? Each correct answer presents part of the solution.

NOTE: Each correct selection is worth one point.

A. Sign in to Content Safety Studio and select Moderate text content.

B. From the Azure portal, create an Azure AI Content Safety resource.

C. From the Azure portal, create an Azure OpenAI resource.

D. Sign in to Azure AI Foundry and select Safety + security.

E. Sign in to Content Safety Studio and select Protected material detection for text.

A.   Sign in to Content Safety Studio and select Moderate text content.
B.   From the Azure portal, create an Azure AI Content Safety resource.


Explanation:
To detect harmful content (discriminatory language) with minimal development effort, you need an Azure AI Content Safety resource and its testing interface. First, create the Content Safety resource, then use Content Safety Studio's "Moderate text content" feature to test and prototype detection without writing code.

Correct Options:

B. From the Azure portal, create an Azure AI Content Safety resource.
First, provision an Azure AI Content Safety resource in your subscription. This resource provides the API for text moderation, including detection of hate speech, discriminatory language, and other harmful content.

A. Sign in to Content Safety Studio and select Moderate text content.
After creating the resource, go to Content Safety Studio (contentsafety.cognitive.azure.com). Select "Moderate text content" to test the service with sample comments. This allows rapid prototyping without coding.

Incorrect Options:

C. From the Azure portal, create an Azure OpenAI resource. – Azure OpenAI is for text generation, not content moderation. It does not have built-in detection of discriminatory language with minimal effort.

D. Sign in to Azure AI Foundry and select Safety + security. – AI Foundry is for managing AI models, not a direct prototyping tool for Content Safety moderation.

E. Sign in to Content Safety Studio and select Protected material detection for text. – Protected material detection identifies copyrighted content, not discriminatory or harmful language.

Reference:
Microsoft Learn: "Azure AI Content Safety – Quickstart" – Create resource, then use Studio for prototyping.
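Once the prototype in Content Safety Studio behaves as expected, the same moderation check can move into App code. A hedged sketch, assuming the azure-ai-contentsafety 1.x Python SDK (model names should be checked against current docs); is_harmful is a hypothetical helper for thresholding severities:

```python
def is_harmful(category_severities, threshold=2):
    """Flag a comment if any category (Hate, Violence, ...) meets the severity threshold."""
    return any(severity >= threshold for severity in category_severities.values())

def moderate_comment(client, comment):
    """Return {category: severity} for one feedback comment."""
    from azure.ai.contentsafety.models import AnalyzeTextOptions  # pip install azure-ai-contentsafety
    result = client.analyze_text(AnalyzeTextOptions(text=comment))
    return {c.category: c.severity for c in result.categories_analysis}

# Usage (placeholders):
#   from azure.ai.contentsafety import ContentSafetyClient
#   from azure.core.credentials import AzureKeyCredential
#   client = ContentSafetyClient("<endpoint>", AzureKeyCredential("<key>"))
#   print(is_harmful(moderate_comment(client, "some feedback comment")))
```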

You have an app named App1 that uses a custom Azure AI Document Intelligence model to recognize contract documents. You need to ensure that the model supports an additional contract format. The solution must minimize development effort. What should you do?

A. Lower the confidence score threshold of App1.

B. Lower the accuracy threshold of App1.

C. Add the additional contract format to the existing training set. Retrain the model.

D. Create a new training set and add the additional contract format to the new training set.

E. Create and train a new custom model.

C.   Add the additional contract format to the existing training set. Retrain the model.

Explanation:
To support an additional contract format with minimal development effort, you should add the new format's labeled samples to the existing training dataset and retrain the model. This extends the existing model's capabilities without starting from scratch, preserving previous learning while incorporating the new format.

Correct Option:

C. Add the additional contract format to the existing training set. Retrain the model.
Document Intelligence custom models are iteratively improved. By adding labeled examples of the new contract format to your existing training set and retraining, the model learns to recognize both the original and new formats. This minimizes effort compared to creating a new model.

Incorrect Options:

A. Lower the confidence score threshold of App1. –
This changes the acceptance criteria for predictions but does not teach the model to recognize the new contract format. It may increase false positives.

B. Lower the accuracy threshold of App1. –
Similar to confidence threshold, this does not improve model capability; it only reduces quality standards.

D. Create a new training set and add the additional contract format to the new training set. –
This would require starting over and would not include the original contract format unless you also add those samples, duplicating effort.

E. Create and train a new custom model. –
This discards all previous training, requiring relabeling of original contracts plus new ones. This is higher effort than retraining the existing model.

Reference:
Microsoft Learn: "Document Intelligence – Improve a custom model" – Add new labeled data to existing training set and retrain.
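The retrain step can be scripted. A sketch, assuming the azure-ai-formrecognizer 3.3 Python SDK (method names differ in older versions, so verify against current docs); since trained models are immutable, the rebuild produces a model under a new model ID that App1 then points at:

```python
def rebuild_contract_model(endpoint, key, training_container_sas_url, new_model_id):
    """Rebuild the custom model after adding labeled samples of the new contract
    format to the existing training container."""
    from azure.ai.formrecognizer import DocumentModelAdministrationClient, ModelBuildMode  # pip install azure-ai-formrecognizer
    from azure.core.credentials import AzureKeyCredential

    client = DocumentModelAdministrationClient(endpoint, AzureKeyCredential(key))
    poller = client.begin_build_document_model(
        ModelBuildMode.TEMPLATE,                        # template mode suits fixed-layout contracts
        blob_container_url=training_container_sas_url,  # now contains old + new formats
        model_id=new_model_id,
    )
    return poller.result()

# rebuild_contract_model("<endpoint>", "<key>", "<container-sas-url>", "contracts-v2")
```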

You have an Azure subscription that contains an Azure AI Content Safety resource named CS1. You plan to build an app that will analyze user-generated documents and identify obscure offensive terms. You need to create a dictionary that will contain the offensive terms. The solution must minimize development effort. What should you use?

A. a text classifier

B. text moderation

C. language detection

D. a blacklist

D.   a blacklist

Explanation:
Content Safety allows you to create custom blocklists (blacklists) of terms to detect. For obscure offensive terms not covered by the default models, you can create a custom blacklist (blocklist) and add your specific terms. This requires no model training and minimizes development effort compared to building a custom classifier.

Correct Option:

D. a blacklist
Azure AI Content Safety supports custom blocklists (blacklists) where you can upload a list of terms to be detected as offensive. The API then checks input text against both the default model and your custom list. This is the minimal-effort solution for detecting obscure offensive terms.
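The blocklist can also be created programmatically. A hedged sketch, assuming the azure-ai-contentsafety 1.x Python SDK; BlocklistClient and the model class names below should be checked against the current package:

```python
def create_offensive_terms_blocklist(endpoint, key, blocklist_name, terms):
    """Create (or update) a blocklist on CS1 and add the custom offensive terms."""
    from azure.ai.contentsafety import BlocklistClient  # pip install azure-ai-contentsafety
    from azure.ai.contentsafety.models import (
        AddOrUpdateTextBlocklistItemsOptions, TextBlocklist, TextBlocklistItem)
    from azure.core.credentials import AzureKeyCredential

    client = BlocklistClient(endpoint, AzureKeyCredential(key))
    client.create_or_update_text_blocklist(
        blocklist_name=blocklist_name,
        options=TextBlocklist(blocklist_name=blocklist_name,
                              description="Obscure offensive terms"))
    client.add_or_update_blocklist_items(
        blocklist_name=blocklist_name,
        options=AddOrUpdateTextBlocklistItemsOptions(
            blocklist_items=[TextBlocklistItem(text=term) for term in terms]))

# create_offensive_terms_blocklist("<endpoint>", "<key>", "offensive-terms", ["term1", "term2"])
```

Once the blocklist exists, pass its name when analyzing text so matches are reported alongside the default category results.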

Incorrect Options:

A. a text classifier –
Building a text classifier requires labeling data, training a model, and deployment. This is high effort compared to using a blacklist.

B. text moderation –
Text moderation is a capability (Content Safety or Content Moderator), not a specific feature for custom terms. While Content Moderator supports custom term lists, the specific feature to use here is the blacklist (blocklist).

C. language detection –
Language detection identifies the language of the text. It does not detect offensive terms.

Reference:
Microsoft Learn: "Azure AI Content Safety – Custom blocklists" – Create blocklists of terms to detect offensive content.

You train an Azure Custom Vision object detection model to identify a company's products by using the Retail domain.

You plan to deploy the model as part of a mobile app for Android phones.

You need to prepare the model for deployment.

Which three actions should you perform in sequence? To answer, move the appropriate actions from the list of actions to the answer area and arrange them in the correct order.




Explanation:
For mobile deployment (Android), the model must be in a compact domain (e.g., "General (compact)" or "Retail (compact)"). The Retail domain is not compact and cannot be exported. You must change the domain to a compact version, retrain the model, then export it (e.g., to TensorFlow Lite for Android).

Correct Option (in sequence):

Change the model domain.
First, change the project domain from "Retail" to a compact domain compatible with mobile export, such as "General (compact)" or "Retail (compact)". The domain determines export capabilities. This is done in the Custom Vision portal under Project Settings.

Retrain the model.
After changing the domain, retrain the model. Training adjusts the model architecture to the new compact domain. The model will now be optimized for size and speed, suitable for mobile deployment.

Export the model.
Once retrained, export the model to a format compatible with Android (e.g., TensorFlow Lite, ONNX, CoreML). The export option appears after training a compact domain model. Download the exported file for integration into the Android app.

Incorrect Option (not used in sequence):
Test the model. – Testing is optional for validation but not required for preparing the model for deployment. Export can be done without explicit testing.

Reference:
Microsoft Learn: "Custom Vision – Export models for mobile" – Compact domains support export; change domain → retrain → export.
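The change-domain step happens in the portal (Project Settings), but the export step can be scripted. A sketch, assuming the azure-cognitiveservices-vision-customvision training SDK; method and parameter names should be verified against the current package:

```python
def export_for_android(trainer, project_id, iteration_id):
    """Request a TensorFlow Lite export of a trained compact-domain iteration."""
    # Export is only offered for compact domains; a non-compact iteration fails here.
    return trainer.export_iteration(
        project_id, iteration_id, platform="TensorFlow", flavor="TensorFlowLite")

# Usage (placeholders):
#   from azure.cognitiveservices.vision.customvision.training import CustomVisionTrainingClient
#   from msrest.authentication import ApiKeyCredentials
#   trainer = CustomVisionTrainingClient(
#       "<endpoint>", ApiKeyCredentials(in_headers={"Training-key": "<key>"}))
#   export = export_for_android(trainer, "<project-id>", "<iteration-id>")
#   # poll trainer.get_exports(...) until the export completes, then fetch export.download_uri
```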

You are building an app that will translate speech by using the Azure AI Language service.

You need to configure the app to translate the speech from English to Italian.

How should you complete the code? To answer, select the appropriate options in the answer area.

NOTE: Each correct selection is worth one point.




Explanation:
To configure speech translation from English to Italian, set the source language using speech_recognition_language (to "en-US") and add the target language using add_target_language (to "it-IT" or "it"). The add_target_language method adds Italian as a translation target.

Correct Options:

First blank (after speech_translation_config.): speech_recognition_language
This property sets the language of the incoming speech audio. For English, set it to "en-US". The speech recognizer will listen for English and convert it to text before translation.

Second blank (assigned value): "en-US" (implied from the code)
The value assigned to speech_recognition_language should be the locale code for English (United States).

Third blank (after speech_translation_config.): add_target_language
This method adds a target language for translation. For Italian, you would call add_target_language("it-IT") or add_target_language("it"). The method adds Italian as an output language.

Fourth blank (value for target language): "it" or "it-IT" (not explicitly shown in the answer area options, but implied)

Why Other Options Are Incorrect:

For the first blank:

add_target_language – This is for setting target languages, not the source language.

region – This is set in the SpeechTranslationConfig constructor, not as a property here.

voice_name – This specifies the voice for speech synthesis output, not the source language.

For the third blank:

set_speech_synthesis_output_format – This sets the audio output format (e.g., raw PCM, MP3), not the translation target.

speech_recognition_language – Already used for source language; cannot be used for target.

voice_name – Sets the voice for synthesis, not the translation language.

Reference:
Microsoft Learn: "Speech SDK – Translation recognizer" – Use speech_recognition_language for source, add_target_language for target(s).
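Put together, the completed configuration looks like the following sketch using the Speech SDK for Python (the speech_recognition_language and add_target_language names above come from that SDK); key and region are placeholders:

```python
def translate_english_to_italian(speech_key, region):
    """Recognize one English utterance from the microphone and return its Italian translation."""
    import azure.cognitiveservices.speech as speechsdk  # pip install azure-cognitiveservices-speech

    config = speechsdk.translation.SpeechTranslationConfig(subscription=speech_key, region=region)
    config.speech_recognition_language = "en-US"  # source: spoken English
    config.add_target_language("it")              # target: Italian
    recognizer = speechsdk.translation.TranslationRecognizer(translation_config=config)
    result = recognizer.recognize_once()
    return result.translations["it"]

# translate_english_to_italian("<speech-key>", "<region>")
```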

You have an Azure subscription.

You are building a chatbot that will use an Azure OpenAI model.

You need to deploy the model.

Which three actions should you perform in sequence? To answer, move the appropriate actions from the list of actions to the answer area and arrange them in the correct order.




Correct Option (in sequence):

Apply for access to Azure OpenAI.
Azure OpenAI requires an application and approval process. You must submit a request for access via the Azure Portal. Without access, you cannot create an Azure OpenAI resource or deploy models.

Provision an Azure OpenAI resource.
After access is granted, create an Azure OpenAI resource (or "Azure OpenAI account") in your subscription. This resource will host your model deployments and provide the endpoint and keys.

Deploy the GPT model.
Once the resource is provisioned, deploy a GPT model (e.g., GPT-3.5 Turbo, GPT-4) to the resource. This deployment makes the model available for your chatbot to call via the API.

Incorrect Options (not used in sequence):

Deploy the embeddings model. – Embedding models (e.g., text-embedding-ada-002) are used for similarity search and vectorization, not for chatbot conversation generation. The chatbot needs a GPT model, not an embeddings model.

Provision Azure API Management. – API Management is not required for deploying or using Azure OpenAI. It is an optional gateway for API governance, not part of the core deployment sequence.

Deploy the DALL-E model. – DALL-E is for image generation, not text-based chatbot conversations. This is not the correct model for a chatbot.

Reference:
Microsoft Learn: "Azure OpenAI Service – How to request access" – Apply for access first.
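After the three steps, the chatbot calls the deployment by name. A sketch, assuming the openai 1.x Python package; build_chat_messages is a hypothetical helper, and the api_version value is illustrative:

```python
def build_chat_messages(user_message):
    """Assemble the message list for one chatbot turn."""
    return [
        {"role": "system", "content": "You are a helpful chatbot."},
        {"role": "user", "content": user_message},
    ]

def ask_chatbot(endpoint, key, deployment_name, user_message):
    from openai import AzureOpenAI  # pip install openai

    client = AzureOpenAI(azure_endpoint=endpoint, api_key=key, api_version="2024-02-01")
    response = client.chat.completions.create(
        model=deployment_name,  # the deployment name you chose, not the base model name
        messages=build_chat_messages(user_message),
    )
    return response.choices[0].message.content

# ask_chatbot("https://<resource>.openai.azure.com", "<key>", "<gpt-deployment>", "Hello!")
```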

You have an Azure subscription that contains an Azure OpenAI resource named AH.

You need to analyze an image to obtain a text description.

Which four actions should you perform in sequence from Azure OpenAI Studio? To answer, move the appropriate actions from the list of actions to the answer area and arrange them in the correct order.




Correct Option (in sequence):

Create a new deployment, select a GPT-4 model, and set Model version to vision-preview.
First, deploy the GPT-4 model with the vision-preview version. This model variant supports image understanding (vision capabilities), allowing the model to analyze uploaded images.

Open Chat playground and select the deployed model.
After deployment, open the Chat playground (not Completions playground, as vision capabilities are available in the chat interface). Select the GPT-4 vision-preview deployment you just created.

In the System message field, enter "You are an AI assistant that describes images."
Set a system message to instruct the model to focus on image description. This guides the model's behavior and improves response quality.

In the Chat session pane, enter a text prompt of "Describe this image", and upload an image by using the attachment button.
Finally, in the chat session, type the prompt "Describe this image" and use the attachment button (paperclip icon) to upload the image. The model will then generate a text description.

Incorrect Options (not used in sequence):

Open Completions playground and select the deployed model. – The Completions playground does not support image attachments. Vision capabilities are available only in the Chat playground.

Create a new deployment and select a DALL-E model. – DALL-E generates images from text, not the reverse. You need image-to-text (vision), not text-to-image.

Create a new deployment, select a text-embedding-ada-002 model. – Embedding models convert text to vectors for similarity search. They do not analyze images or generate descriptions.

Reference:

Microsoft Learn: "Azure OpenAI GPT-4 vision-preview" – Enables image analysis and description generation.
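The same playground flow maps onto an API call: a chat message whose content mixes a text part and an image_url part. A sketch, assuming the openai 1.x Python package; the helper names and api_version are illustrative:

```python
def build_vision_messages(prompt, image_url):
    """Pair a text prompt with an image, as the vision-enabled chat API expects."""
    return [
        {"role": "system", "content": "You are an AI assistant that describes images."},
        {"role": "user", "content": [
            {"type": "text", "text": prompt},
            {"type": "image_url", "image_url": {"url": image_url}},
        ]},
    ]

def describe_image(endpoint, key, deployment_name, image_url):
    from openai import AzureOpenAI  # pip install openai

    client = AzureOpenAI(azure_endpoint=endpoint, api_key=key, api_version="2024-02-01")
    response = client.chat.completions.create(
        model=deployment_name,  # the GPT-4 vision deployment created above
        messages=build_vision_messages("Describe this image", image_url),
    )
    return response.choices[0].message.content
```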

You need to measure the public perception of your brand on social media by using natural language processing. Which Azure service should you use?

A. Language service

B. Content Moderator

C. Computer Vision

D. Form Recognizer

A.   Language service

Summary:
To measure public brand perception on social media, you need a service capable of performing Sentiment Analysis and Opinion Mining. This involves processing unstructured text to determine a positive, negative, or neutral sentiment, and even identifying specific opinions about different aspects of your brand mentioned in the text. The Azure service specifically designed for these advanced natural language processing tasks is the Language service.

Correct Option:

A. Language service:
This is the correct choice. The Azure Language service, specifically its Sentiment Analysis and Opinion Mining feature, is designed for this exact purpose. It analyzes text to provide a sentiment score (e.g., positive, negative, neutral) and can perform granular "opinion mining" to identify how people feel about specific attributes of your brand (e.g., "the battery life is great, but the price is too high").
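For brand monitoring, the Sentiment Analysis results can be aggregated over a batch of posts. A minimal sketch, assuming the azure-ai-textanalytics v5 SDK; brand_sentiment is a hypothetical helper:

```python
def brand_sentiment(client, posts):
    """Count positive/neutral/negative/mixed sentiment across social media posts."""
    counts = {"positive": 0, "neutral": 0, "negative": 0, "mixed": 0}
    for doc in client.analyze_sentiment(posts):
        if not doc.is_error:
            counts[doc.sentiment] += 1
    return counts

# Usage (placeholders):
#   from azure.ai.textanalytics import TextAnalyticsClient  # pip install azure-ai-textanalytics
#   from azure.core.credentials import AzureKeyCredential
#   client = TextAnalyticsClient("<endpoint>", AzureKeyCredential("<key>"))
#   print(brand_sentiment(client, ["Love this brand!", "Terrible support."]))
```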

Incorrect Options:

B. Content Moderator:
This service is focused on detecting potentially offensive, unwanted, or risky content. It is used to filter out profanity, personally identifiable information (PII), or inappropriate images. It is not designed to perform nuanced sentiment analysis to gauge overall public perception.

C. Computer Vision:
This service is built to analyze and extract information from visual content, such as images and videos. Its capabilities include object detection, optical character recognition (OCR), and image description. It does not process textual content from social media posts for sentiment.

D. Form Recognizer:
This service is a specialized tool for document intelligence. It uses OCR and machine learning to automatically extract text, key-value pairs, and tables from structured documents like forms, invoices, and receipts. It is not suited for analyzing unstructured social media text for sentiment.

Reference:
Microsoft Official Documentation: What is the Azure Language service?

You are developing a solution for the Management-Bookkeepers group to meet the document processing requirements. The solution must contain the following components:

✑ A Form Recognizer resource
✑ An Azure web app that hosts the Form Recognizer sample labeling tool


The Management-Bookkeepers group needs to create a custom table extractor by using the sample labeling tool. Which three actions should the Management-Bookkeepers group perform in sequence? To answer, move the appropriate actions from the list of actions to the answer area and arrange them in the correct order. Select and Place:



Summary:
To create a custom table extractor using the Form Recognizer sample labeling tool, the group must first create a project and load the sample documents that will be used for training. Next, they must manually label the data within these documents, specifically drawing bounding boxes around the tables and columns they want the model to learn. Finally, with the labeled data prepared, they can initiate the training process to build the custom model.

Correct Option & Sequence:
The three actions should be performed in the following sequence:

Create a new project and load sample documents

Label the sample documents

Train a custom model

Explanation of the Correct Sequence:

Create a new project and load sample documents:
This is the foundational first step. The Form Recognizer Studio or sample labeling tool requires a project to be created, which connects to your Azure Blob Storage container where the sample documents are stored. Without a project and source data, no further actions can be taken.

Label the sample documents:
After the documents are loaded into the project, the next critical step is to provide ground truth labels. For a custom table extractor, this involves manually drawing bounding boxes around the tables, rows, and columns on the sample documents to teach the model what to extract.

Train a custom model:
Once a sufficient number of documents (typically 5 or more) have been labeled, the training process can begin. The tool uses these labeled documents to build a machine learning model that can automatically identify and extract tables from new, unseen documents that have a similar structure.

Incorrect Option:

Create a composite model:
This action is not part of the core sequence for building a single custom model from scratch. A composite model is an advanced feature used to combine multiple pre-existing custom models into a single endpoint. It is performed after individual custom models have already been trained and is not a prerequisite for creating the initial custom table extractor.

Reference:
Microsoft Official Documentation: Build a custom model
