

Microsoft AI-102 Exam

Questions 181-190 of 241

Question 181
DRAG DROP -
You plan to build a chatbot to support task tracking.
You create a Language Understanding service named lu1.
You need to build a Language Understanding model to integrate into the chatbot. The solution must minimize development time to build the model.
Which four actions should you perform in sequence? To answer, move the appropriate actions from the list of actions to the answer area and arrange them in the correct order. (Choose four.)
Select and Place:
[Question exhibit: AI-102_181Q.png]
[Answer exhibit: AI-102_181R.png]



Step 1: Add a new application.
Create a new app:
1. Sign in to the LUIS portal at https://www.luis.ai.
2. Select Create new app.
3. Etc.
Step 2: Add example utterances.
To classify an utterance, an intent needs examples of user utterances that should be classified with that intent.
Step 3: Train the application.
Step 4: Publish the application.
To receive a LUIS prediction in a chatbot or other client application, you need to publish the app to the prediction endpoint.
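For context, a minimal end-to-end sketch of these four steps using the LUIS Authoring SDK for .NET (Microsoft.Azure.CognitiveServices.Language.LUIS.Authoring). The key, endpoint, app name, intent, and utterance below are placeholders, and a production script would poll Train.GetStatusAsync before publishing:

using System;
using System.Threading.Tasks;
using Microsoft.Azure.CognitiveServices.Language.LUIS.Authoring;
using Microsoft.Azure.CognitiveServices.Language.LUIS.Authoring.Models;

class BuildLuisModel
{
    static async Task Main()
    {
        // Placeholder credentials - substitute your own authoring key and endpoint.
        var client = new LUISAuthoringClient(
            new ApiKeyServiceClientCredentials("<authoring-key>"))
        {
            Endpoint = "https://<your-resource>.cognitiveservices.azure.com"
        };

        // Step 1: Add a new application.
        var appId = await client.Apps.AddAsync(new ApplicationCreateObject
        {
            Name = "lu1-task-tracking",
            Culture = "en-us",
            InitialVersionId = "0.1"
        });

        // Step 2: Add an intent and an example utterance for it.
        await client.Model.AddIntentAsync(appId, "0.1",
            new ModelCreateObject { Name = "AddTask" });
        await client.Examples.AddAsync(appId, "0.1", new ExampleLabelObject
        {
            Text = "add buy milk to my task list",
            IntentName = "AddTask"
        });

        // Step 3: Train the application.
        await client.Train.TrainVersionAsync(appId, "0.1");

        // Step 4: Publish the application to the production slot
        // (in practice, wait for training to complete first).
        await client.Apps.PublishAsync(appId, new ApplicationPublishObject
        {
            VersionId = "0.1",
            IsStaging = false
        });
    }
}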
Reference:
https://docs.microsoft.com/en-us/azure/cognitive-services/luis/tutorial-intents-only

Question 182
DRAG DROP -
You plan to build a chatbot to support task tracking.
You create a Language Understanding service named lu1.
You need to build a Language Understanding model to integrate into the chatbot. The solution must minimize development time to build the model.
Which four actions should you perform in sequence? To answer, move the appropriate actions from the list of actions to the answer area and arrange them in the correct order.
Select and Place:
[Question exhibit: AI-102_182Q.png]
[Answer exhibit: AI-102_182R.png]



Step 1: Add a new application.
Create a new app:
1. Sign in to the LUIS portal at https://www.luis.ai.
2. Select Create new app.
3. Etc.
Step 2: Add example utterances.
To classify an utterance, an intent needs examples of user utterances that should be classified with that intent.
Step 3: Train the application.
Step 4: Publish the application.
To receive a LUIS prediction in a chatbot or other client application, you need to publish the app to the prediction endpoint.
Reference:
https://docs.microsoft.com/en-us/azure/cognitive-services/luis/tutorial-intents-only

Question 183
DRAG DROP -
You are using a Language Understanding service to handle natural language input from the users of a web-based customer agent.
The users report that the agent frequently responds with the following generic response: "Sorry, I don't understand that."
You need to improve the ability of the agent to respond to requests.
Which three actions should you perform in sequence? To answer, move the appropriate actions from the list of actions to the answer area and arrange them in the correct order. (Choose three.)
Select and Place:
[Question exhibit: AI-102_183Q.png]
[Answer exhibit: AI-102_183R.png]



Step 1: Add prebuilt domain models as required.
Prebuilt models provide domains, intents, utterances, and entities. You can start your app with a prebuilt model or add a relevant model to your app later.
Note: Language Understanding (LUIS) provides prebuilt domains, which are pre-trained models of intents and entities that work together for domains or common categories of client applications.
The prebuilt domains are trained and ready to add to your LUIS app. The intents and entities of a prebuilt domain are fully customizable once you've added them to your app.
Step 2: Enable active learning.
To enable active learning, you must log user queries. You do this by calling the prediction endpoint with the log=true query string parameter.
Step 3: Train and republish the Language Understanding model.
The process of reviewing endpoint utterances for correct predictions is called active learning. Active learning captures endpoint queries and selects the user utterances that it is unsure of. You review these utterances to select the intent and mark entities for these real-world utterances, accept the changes into your example utterances, and then train and publish. LUIS then identifies utterances more accurately.
Incorrect Answers:
Enable log collection by using Log Analytics
Application authors can choose to enable logging on the utterances that are sent to a published application. This is not done through Log Analytics.
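As a concrete illustration of the log=true parameter, here is a minimal sketch of a LUIS v3 prediction request in C#; the region, app ID, key, and query are placeholders:

using System;
using System.Net.Http;
using System.Threading.Tasks;

class LogUserQuery
{
    static async Task Main()
    {
        // Placeholder values - substitute your own prediction resource, app ID, and key.
        const string endpoint = "https://westus.api.cognitive.microsoft.com";
        const string appId = "<your-app-id>";
        const string key = "<your-prediction-key>";

        using var http = new HttpClient();

        // log=true tells LUIS to store the query so it appears in the
        // "Review endpoint utterances" list used by active learning.
        var url = $"{endpoint}/luis/prediction/v3.0/apps/{appId}/slots/production/predict" +
                  $"?subscription-key={key}&log=true&query=" +
                  Uri.EscapeDataString("track my tasks");

        Console.WriteLine(await http.GetStringAsync(url));
    }
}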
Reference:
https://docs.microsoft.com/en-us/azure/cognitive-services/luis/luis-how-to-review-endpoint-utterances#log-user-queries-to-enable-active-learning
https://docs.microsoft.com/en-us/azure/cognitive-services/luis/luis-concept-prebuilt-model

Question 184
You are building a bot on a local computer by using the Microsoft Bot Framework. The bot will use an existing Language Understanding model.
You need to translate the Language Understanding model locally by using the Bot Framework CLI.
What should you do first?



You might want to manage the translation and localization of the language understanding content for your bot independently.
The translate command in the @microsoft/bf-lu library takes advantage of the Microsoft text translation API to automatically machine-translate .lu files to one or more of the 60+ languages supported by the Microsoft text translation cognitive service.
What is translated? An .lu file (or a list of .lu files under a specific path) and, optionally:
- Comments in the .lu file
- LU reference link texts
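A hedged invocation sketch (the paths and language codes are illustrative; confirm the flag names against the linked translate-command documentation for your installed CLI version):

bf luis:translate --in ./dialogs --recurse --tgtlang fr,de --translatekey <your-translator-key> --out ./translated --translate_comments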
Reference:
https://github.com/microsoft/botframework-cli/blob/main/packages/luis/docs/translate-command.md

Question 185
DRAG DROP -
You are using a Language Understanding service to handle natural language input from the users of a web-based customer agent.
The users report that the agent frequently responds with the following generic response: "Sorry, I don't understand that."
You need to improve the ability of the agent to respond to requests.
Which three actions should you perform in sequence? To answer, move the appropriate actions from the list of actions to the answer area and arrange them in the correct order.
Select and Place:
[Question exhibit: AI-102_185Q.png]
[Answer exhibit: AI-102_185R.png]



Step 1: Add prebuilt domain models as required.
Prebuilt models provide domains, intents, utterances, and entities. You can start your app with a prebuilt model or add a relevant model to your app later.
Note: Language Understanding (LUIS) provides prebuilt domains, which are pre-trained models of intents and entities that work together for domains or common categories of client applications.
The prebuilt domains are trained and ready to add to your LUIS app. The intents and entities of a prebuilt domain are fully customizable once you've added them to your app.
Step 2: Enable active learning.
To enable active learning, you must log user queries. You do this by calling the prediction endpoint with the log=true query string parameter.
Step 3: Train and republish the Language Understanding model.
The process of reviewing endpoint utterances for correct predictions is called active learning. Active learning captures endpoint queries and selects the user utterances that it is unsure of. You review these utterances to select the intent and mark entities for these real-world utterances, accept the changes into your example utterances, and then train and publish. LUIS then identifies utterances more accurately.
Incorrect Answers:
Enable log collection by using Log Analytics
Application authors can choose to enable logging on the utterances that are sent to a published application. This is not done through Log Analytics.
Reference:
https://docs.microsoft.com/en-us/azure/cognitive-services/luis/luis-how-to-review-endpoint-utterances#log-user-queries-to-enable-active-learning
https://docs.microsoft.com/en-us/azure/cognitive-services/luis/luis-concept-prebuilt-model


Question 186
You build a conversational bot named bot1.
You need to configure the bot to use a QnA Maker application.
From the Azure Portal, where can you find the information required by bot1 to connect to the QnA Maker application?



Obtain values to connect your bot to the knowledge base
1. In the QnA Maker site, select your knowledge base.
2. With your knowledge base open, select the SETTINGS tab. Record the value shown for service name. This value is useful for finding your knowledge base of interest when using the QnA Maker portal interface. It's not used to connect your bot app to this knowledge base.
3. Scroll down to find Deployment details and record the following values from the Postman sample HTTP request:
POST /knowledgebases/<knowledge-base-id>/generateAnswer
Host: <your-host-url>
Authorization: EndpointKey <your-endpoint-key>
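Once recorded, those three values are exactly what the bot needs to construct a QnAMaker client. A minimal sketch using Microsoft.Bot.Builder.AI.QnA, with placeholder values:

using Microsoft.Bot.Builder.AI.QnA;

// Values recorded from the knowledge base's Deployment details (placeholders).
var endpoint = new QnAMakerEndpoint
{
    KnowledgeBaseId = "<knowledge-base-id>",
    EndpointKey = "<your-endpoint-key>",
    Host = "<your-host-url>"   // the Host value from the Postman sample request
};
var qnaMaker = new QnAMaker(endpoint);

// Inside a turn handler, query the knowledge base with the user's message:
// var results = await qnaMaker.GetAnswersAsync(turnContext);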
Reference:
https://docs.microsoft.com/en-us/azure/bot-service/bot-builder-howto-qna

Question 187
HOTSPOT -
You are building a chatbot for a Microsoft Teams channel by using the Microsoft Bot Framework SDK. The chatbot will use the following code.
[Code exhibit: AI-102_187Q_1.jpg]
For each of the following statements, select Yes if the statement is true. Otherwise, select No.
NOTE: Each correct selection is worth one point.
Hot Area:
[Hot area: AI-102_187Q_2.png]
[Answer exhibit: AI-102_187R.png]



Box 1: Yes -
Override ActivityHandler.OnMembersAddedAsync in a derived class to provide logic for when members other than the bot join the conversation, such as your bot's welcome logic.
Box 2: Yes -
membersAdded is a list of all the members added to the conversation, as described by the conversation update activity.
Box 3: No -
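For context, a minimal override of OnMembersAddedAsync with welcome logic (the class name and message text are illustrative):

using System.Collections.Generic;
using System.Threading;
using System.Threading.Tasks;
using Microsoft.Bot.Builder;
using Microsoft.Bot.Schema;

public class WelcomeBot : ActivityHandler
{
    // Called when members are added to the conversation; the bot itself
    // can also appear in membersAdded, so filter it out by recipient ID.
    protected override async Task OnMembersAddedAsync(
        IList<ChannelAccount> membersAdded,
        ITurnContext<IConversationUpdateActivity> turnContext,
        CancellationToken cancellationToken)
    {
        foreach (var member in membersAdded)
        {
            if (member.Id != turnContext.Activity.Recipient.Id)
            {
                await turnContext.SendActivityAsync(
                    MessageFactory.Text("Welcome!"), cancellationToken);
            }
        }
    }
}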
Reference:
https://docs.microsoft.com/en-us/dotnet/api/microsoft.bot.builder.activityhandler.onmembersaddedasync?view=botbuilder-dotnet-stable

Question 188
HOTSPOT -
You are reviewing the design of a chatbot. The chatbot includes a language generation file that contains the following fragment.
# Greet(user)
- ${Greeting()}, ${user.name}
For each of the following statements, select Yes if the statement is true. Otherwise, select No.
NOTE: Each correct selection is worth one point.
Hot Area:
[Hot area: AI-102_188Q.png]
[Answer exhibit: AI-102_188R.png]



Box 1: No -
Example: Greet a user whose name is stored in `user.name`
- ${ welcomeUser(user.name) }
Example: Greet a user whose name you don't know:
- ${ welcomeUser() }
Box 2: No -
Greet(user) is a template called from a Send a response action.
Box 3: Yes -
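Note that ${Greeting()} calls another template, which would have to be defined elsewhere in the same .lg file. A minimal hypothetical definition:

# Greeting
- Hi
- Hello
- Good day

At run time, the language generation engine picks one of the template's variations at random, so Greet(user) might produce "Hello, Jane" if user.name is Jane.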
Reference:
https://docs.microsoft.com/en-us/composer/how-to-ask-for-user-input

Question 189
HOTSPOT -
You are building a chatbot by using the Microsoft Bot Framework SDK.
You use an object named UserProfile to store user profile information and an object named ConversationData to store information related to a conversation.
You create the following state accessors to store both objects in state:
var userStateAccessors = _userState.CreateProperty<UserProfile>(nameof(UserProfile));
var conversationStateAccessors = _conversationState.CreateProperty<ConversationData>(nameof(ConversationData));
The state storage mechanism is set to Memory Storage.
For each of the following statements, select Yes if the statement is true. Otherwise, select No.
NOTE: Each correct selection is worth one point.
Hot Area:
[Hot area: AI-102_189Q.png]
[Answer exhibit: AI-102_189R.png]



Box 1: Yes -
You create property accessors by using the CreateProperty method, which provides a handle to the BotState object. Each state property accessor allows you to get or set the value of the associated state property.
Box 2: Yes -
Box 3: No -
Before you exit the turn handler, you use the state management objects' SaveChangesAsync() method to write all state changes back to storage.
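For context, a minimal sketch of how these accessors are used inside a turn handler, ending with the SaveChangesAsync calls (variable names follow the question; the cancellation token comes from the handler's signature):

// Read each property, creating a default instance if nothing is stored yet.
var userProfile = await userStateAccessors.GetAsync(
    turnContext, () => new UserProfile(), cancellationToken);
var conversationData = await conversationStateAccessors.GetAsync(
    turnContext, () => new ConversationData(), cancellationToken);

// ... update userProfile and conversationData during the turn ...

// Persist all state changes back to storage before the turn ends.
await _userState.SaveChangesAsync(turnContext, false, cancellationToken);
await _conversationState.SaveChangesAsync(turnContext, false, cancellationToken);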
Reference:
https://docs.microsoft.com/en-us/azure/bot-service/bot-builder-howto-v4-state

Question 190
HOTSPOT -
You are building a chatbot that will provide information to users as shown in the following exhibit.
[Exhibit: AI-102_190Q_1.png]
Use the drop-down menus to select the answer choice that completes each statement based on the information presented in the graphic.
NOTE: Each correct selection is worth one point.
Hot Area:
[Hot area: AI-102_190Q_2.png]
[Answer exhibit: AI-102_190R.png]



Box 1: A Thumbnail card -
A Thumbnail card typically contains a single thumbnail image, some short text, and one or more buttons.
Incorrect Answers:
- An Adaptive Card is a highly customizable card that can contain any combination of text, speech, images, buttons, and input fields.
- A Hero card typically contains a single large image, one or more buttons, and a small amount of text.
Box 2: an image -
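For context, a minimal sketch of building such a card with the Bot Framework SDK for .NET (the title, text, image URL, and button are illustrative, not taken from the exhibit):

using System.Collections.Generic;
using Microsoft.Bot.Builder;
using Microsoft.Bot.Schema;

var card = new ThumbnailCard
{
    Title = "Task Tracker",                 // some short text
    Text = "Track and manage your tasks.",
    Images = new List<CardImage>            // a single thumbnail image
    {
        new CardImage("https://example.com/logo.png")
    },
    Buttons = new List<CardAction>          // one or more buttons
    {
        new CardAction(ActionTypes.ImBack, "View tasks", value: "view tasks")
    }
};

// Attach the card to a message and send it from a turn handler:
// await turnContext.SendActivityAsync(MessageFactory.Attachment(card.ToAttachment()));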
Reference:
https://docs.microsoft.com/en-us/microsoftteams/platform/task-modules-and-cards/cards/cards-reference




