You use drones to identify where weeds grow between rows of crops to send an instruction for the removal of the weeds. This is an example of which type of computer vision?
A. object detection
B. optical character recognition (OCR)
C. scene segmentation
Object detection is similar to tagging, but the API returns the bounding box coordinates for each tag applied. For example, if an image contains a dog, a cat, and a person, the Detect operation lists those objects together with their coordinates in the image.
Incorrect Answers:
B: Optical character recognition (OCR) allows you to extract printed or handwritten text from images and documents.
C: Scene segmentation determines when a scene changes in a video based on visual cues. A scene depicts a single event and is composed of a series of consecutive, semantically related shots.
Reference:
https://docs.microsoft.com/en-us/ai-builder/object-detection-overview
https://docs.microsoft.com/en-us/azure/cognitive-services/computer-vision/overview-ocr
https://docs.microsoft.com/en-us/azure/azure-video-analyzer/video-analyzer-for-media-docs/video-indexer-overview
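As a hedged illustration of the behavior described above, the sketch below calls the Computer Vision v3.2 Detect REST operation with the `requests` library; the endpoint, key, and image URL are placeholders, not values taken from this question.

```python
# Minimal sketch: Computer Vision v3.2 Detect operation over REST.
# ENDPOINT and KEY are placeholders for your own Computer Vision resource.
import requests

ENDPOINT = "https://<your-resource>.cognitiveservices.azure.com"
KEY = "<your-subscription-key>"

def detect_objects(image_url: str) -> list:
    """Return the objects found in an image, each with a bounding box."""
    response = requests.post(
        f"{ENDPOINT}/vision/v3.2/detect",
        headers={"Ocp-Apim-Subscription-Key": KEY,
                 "Content-Type": "application/json"},
        json={"url": image_url},
    )
    response.raise_for_status()
    return response.json()["objects"]

# Example: list each detected object with its confidence and pixel rectangle.
for obj in detect_objects("https://example.com/field.jpg"):
    box = obj["rectangle"]  # x, y, w, h in pixels
    print(obj["object"], obj["confidence"], box)
```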
Question 92
DRAG DROP - Match the facial recognition tasks to the appropriate questions. To answer, drag the appropriate task from the column on the left to its question on the right. Each task may be used once, more than once, or not at all. NOTE: Each correct selection is worth one point. Select and Place:
Box 1: Verification -
Face verification: check the likelihood that two faces belong to the same person and receive a confidence score.
Box 2: Similarity -
Box 3: Grouping -
Box 4: Identification -
Face detection: detect one or more human faces along with attributes such as age, emotion, pose, smile, and facial hair, including 27 landmarks for each face in the image.
Reference:
https://azure.microsoft.com/en-us/services/cognitive-services/face/#features
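For context on Box 1, the following is a minimal sketch of a face verification call against the Face REST API (`/face/v1.0/verify`); the face IDs are assumed to come from earlier detection calls, and the endpoint and key are placeholders.

```python
# Minimal sketch: Face API verification - do two detected faces belong to
# the same person? faceId1/faceId2 come from prior calls to /face/v1.0/detect.
import requests

ENDPOINT = "https://<your-resource>.cognitiveservices.azure.com"
KEY = "<your-subscription-key>"

def verify_faces(face_id_1: str, face_id_2: str) -> dict:
    response = requests.post(
        f"{ENDPOINT}/face/v1.0/verify",
        headers={"Ocp-Apim-Subscription-Key": KEY},
        json={"faceId1": face_id_1, "faceId2": face_id_2},
    )
    response.raise_for_status()
    return response.json()  # {"isIdentical": bool, "confidence": float}

result = verify_faces("<face-id-1>", "<face-id-2>")
print(result["isIdentical"], result["confidence"])
```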
Question 93
DRAG DROP - Match the types of computer vision workloads to the appropriate scenarios. To answer, drag the appropriate workload type from the column on the left to its scenario on the right. Each workload type may be used once, more than once, or not at all. NOTE: Each correct selection is worth one point. Select and Place:
Box 1: Facial recognition -
Facial recognition includes face detection, which perceives faces and attributes in an image; person identification, which matches an individual in your private repository of up to 1 million people; perceived emotion recognition, which detects a range of facial expressions such as happiness, contempt, neutrality, and fear; and recognition and grouping of similar faces in images.
Box 2: OCR -
Box 3: Object detection -
Object detection is similar to tagging, but the API returns the bounding box coordinates (in pixels) for each object found. For example, if an image contains a dog, a cat, and a person, the Detect operation lists those objects together with their coordinates in the image. You can use this functionality to process the relationships between the objects in an image and to determine whether there are multiple instances of the same tag in an image. The Detect API applies tags based on the objects or living things identified in the image. There is currently no formal relationship between the tagging taxonomy and the object detection taxonomy; at a conceptual level, the Detect API only finds objects and living things, while the Tag API can also include contextual terms such as "indoor", which cannot be localized with bounding boxes.
Reference:
https://azure.microsoft.com/en-us/services/cognitive-services/face/
https://docs.microsoft.com/en-us/azure/cognitive-services/computer-vision/concept-object-detection
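As an illustration of the person-identification capability mentioned under Box 1, the sketch below calls the Face `identify` REST operation against an assumed person group named "employees"; all identifiers, the endpoint, and the key are placeholders.

```python
# Minimal sketch: Face API identification against a private person group.
# face_ids come from /face/v1.0/detect; "employees" is an assumed group ID.
import requests

ENDPOINT = "https://<your-resource>.cognitiveservices.azure.com"
KEY = "<your-subscription-key>"

def identify_faces(face_ids: list, person_group_id: str = "employees") -> list:
    response = requests.post(
        f"{ENDPOINT}/face/v1.0/identify",
        headers={"Ocp-Apim-Subscription-Key": KEY},
        json={"faceIds": face_ids, "personGroupId": person_group_id},
    )
    response.raise_for_status()
    return response.json()  # one entry per face, each with ranked candidates

for match in identify_faces(["<face-id>"]):
    for candidate in match["candidates"]:
        print(match["faceId"], candidate["personId"], candidate["confidence"])
```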
Question 94
You need to determine the location of cars in an image so that you can estimate the distance between the cars. Which type of computer vision should you use?
A. optical character recognition (OCR)
B. object detection
C. image classification
D. face detection
Object detection is similar to tagging, but the API returns the bounding box coordinates (in pixels) for each object found. For example, if an image contains a dog, a cat, and a person, the Detect operation lists those objects together with their coordinates in the image. You can use this functionality to process the relationships between the objects in an image and to determine whether there are multiple instances of the same tag in an image. The Detect API applies tags based on the objects or living things identified in the image. There is currently no formal relationship between the tagging taxonomy and the object detection taxonomy; at a conceptual level, the Detect API only finds objects and living things, while the Tag API can also include contextual terms such as "indoor", which cannot be localized with bounding boxes.
Reference:
https://docs.microsoft.com/en-us/azure/cognitive-services/computer-vision/concept-object-detection
Question 95
HOTSPOT - To complete the sentence, select the appropriate option in the answer area. Hot Area:
Azure Custom Vision is a cognitive service that lets you build, deploy, and improve your own image classifiers. An image classifier is an AI service that applies labels (which represent classes) to images according to their visual characteristics. Unlike the Computer Vision service, Custom Vision allows you to specify the labels to apply.
Note: The Custom Vision service uses a machine learning algorithm to apply labels to images. You, the developer, must submit groups of images that have and lack the characteristics in question, labeling the images yourself at the time of submission. The algorithm then trains on this data and calculates its own accuracy by testing itself on those same images. Once the algorithm is trained, you can test it, retrain it, and eventually use it to classify new images according to the needs of your app. You can also export the model itself for offline use.
Incorrect Answers:
Computer Vision: Azure's Computer Vision service provides developers with access to advanced algorithms that process images and return information based on the visual features you're interested in. For example, Computer Vision can determine whether an image contains adult content, find specific brands or objects, or find human faces.
Reference:
https://docs.microsoft.com/en-us/azure/cognitive-services/custom-vision-service/home
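The sketch below illustrates the last step of that workflow, scoring a new image with a classifier that has already been trained and published in Custom Vision; the project ID, published iteration name, endpoint, and prediction key are placeholders, and the exact REST route can vary by API version.

```python
# Minimal sketch: scoring an image with a published Custom Vision classifier
# over REST. All identifiers and keys are placeholders.
import requests

ENDPOINT = "https://<your-prediction-resource>.cognitiveservices.azure.com"
PREDICTION_KEY = "<your-prediction-key>"
PROJECT_ID = "<your-project-id>"
PUBLISHED_NAME = "classifyModel"  # assumed name of the published iteration

def classify_image(image_url: str) -> list:
    url = (f"{ENDPOINT}/customvision/v3.0/Prediction/{PROJECT_ID}"
           f"/classify/iterations/{PUBLISHED_NAME}/url")
    response = requests.post(
        url,
        headers={"Prediction-Key": PREDICTION_KEY,
                 "Content-Type": "application/json"},
        json={"Url": image_url},
    )
    response.raise_for_status()
    return response.json()["predictions"]  # one label + probability per tag

for prediction in classify_image("https://example.com/photo.jpg"):
    print(prediction["tagName"], prediction["probability"])
```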
Question 96
You send an image to a Computer Vision API and receive back the annotated image shown in the exhibit.
Which type of computer vision was used?
A. object detection
B. face detection
C. optical character recognition (OCR)
D. image classification
Object detection is similar to tagging, but the API returns the bounding box coordinates (in pixels) for each object found. For example, if an image contains a dog, a cat, and a person, the Detect operation lists those objects together with their coordinates in the image. You can use this functionality to process the relationships between the objects in an image and to determine whether there are multiple instances of the same tag in an image. The Detect API applies tags based on the objects or living things identified in the image. There is currently no formal relationship between the tagging taxonomy and the object detection taxonomy; at a conceptual level, the Detect API only finds objects and living things, while the Tag API can also include contextual terms such as "indoor", which cannot be localized with bounding boxes.
Reference:
https://docs.microsoft.com/en-us/azure/cognitive-services/computer-vision/concept-object-detection
Question 97
What are two tasks that can be performed by using the Computer Vision service? Each correct answer presents a complete solution. NOTE: Each correct selection is worth one point.
A. Train a custom image classification model.
B. Detect faces in an image.
C. Recognize handwritten text.
D. Translate the text in an image between languages.
B: Azure's Computer Vision service provides developers with access to advanced algorithms that process images and return information based on the visual features you're interested in. For example, Computer Vision can determine whether an image contains adult content, find specific brands or objects, or find human faces.
C: Computer Vision includes optical character recognition (OCR) capabilities. You can use the Read API to extract printed and handwritten text from images and documents.
Reference:
https://docs.microsoft.com/en-us/azure/cognitive-services/computer-vision/home
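To make option C concrete, the sketch below submits an image to the asynchronous Read operation and polls for the extracted text; the endpoint, key, and image URL are placeholders.

```python
# Minimal sketch: extracting printed or handwritten text with the Read API.
# Read is asynchronous: submit the image, then poll the Operation-Location URL.
import time
import requests

ENDPOINT = "https://<your-resource>.cognitiveservices.azure.com"
KEY = "<your-subscription-key>"
HEADERS = {"Ocp-Apim-Subscription-Key": KEY, "Content-Type": "application/json"}

def read_text(image_url: str) -> list:
    submit = requests.post(f"{ENDPOINT}/vision/v3.2/read/analyze",
                           headers=HEADERS, json={"url": image_url})
    submit.raise_for_status()
    operation_url = submit.headers["Operation-Location"]

    while True:
        result = requests.get(operation_url, headers=HEADERS).json()
        if result["status"] in ("succeeded", "failed"):
            break
        time.sleep(1)  # wait before polling again

    lines = []
    for page in result["analyzeResult"]["readResults"]:
        lines.extend(line["text"] for line in page["lines"])
    return lines

print(read_text("https://example.com/receipt.jpg"))
```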
Question 98
Your website has a chatbot to assist customers. You need to detect when a customer is upset based on what the customer types in the chatbot. Which type of AI workload should you use?
A. anomaly detection
B. semantic segmentation
C. regression
D. natural language processing
Natural language processing (NLP) is used for tasks such as sentiment analysis, topic detection, language detection, key phrase extraction, and document categorization. Sentiment analysis is the process of determining whether a piece of writing is positive, negative, or neutral, which is what is needed to detect an upset customer from chatbot messages.
Reference:
https://docs.microsoft.com/en-us/azure/architecture/data-guide/technology-choices/natural-language-processing
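A minimal sketch of how such a chatbot check might look, assuming the azure-ai-textanalytics package and a Language resource; the endpoint, key, sample message, and 0.7 threshold are illustrative assumptions.

```python
# Minimal sketch: flagging upset chatbot messages with sentiment analysis.
# Assumes the azure-ai-textanalytics package; endpoint and key are placeholders.
from azure.ai.textanalytics import TextAnalyticsClient
from azure.core.credentials import AzureKeyCredential

client = TextAnalyticsClient(
    endpoint="https://<your-resource>.cognitiveservices.azure.com",
    credential=AzureKeyCredential("<your-key>"),
)

messages = ["My order never arrived and nobody is answering me!"]
for doc in client.analyze_sentiment(messages):
    # doc.sentiment is "positive", "neutral", "negative", or "mixed"
    upset = doc.sentiment == "negative" and doc.confidence_scores.negative > 0.7
    print(doc.sentiment, doc.confidence_scores.negative,
          "escalate" if upset else "ok")
```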
Question 99
What is a use case for classification?
A. predicting how many cups of coffee a person will drink based on how many hours the person slept the previous night
B. analyzing the contents of images and grouping images that have similar colors
C. predicting whether someone uses a bicycle to travel to work based on the distance from home to work
D. predicting how many minutes it will take someone to run a race based on past race times
Two-class classification provides the answer to simple two-choice questions such as Yes/No or True/False. Predicting whether someone uses a bicycle to travel to work is a yes/no outcome, which makes option C a classification task.
Incorrect Answers:
A: This is regression.
B: This is clustering.
D: This is regression.
Reference:
https://docs.microsoft.com/en-us/azure/machine-learning/algorithm-module-reference/linear-regression
https://docs.microsoft.com/en-us/azure/machine-learning/studio-module-reference/machine-learning-initialize-model-clustering
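To show why option C is a two-class classification problem, the sketch below frames it with scikit-learn; the tiny distance/label dataset is invented purely for illustration.

```python
# Minimal sketch: option C as two-class classification.
# Feature: distance from home to work (km); label: 1 = cycles to work, 0 = does not.
from sklearn.linear_model import LogisticRegression

distances_km = [[1.0], [2.5], [4.0], [6.0], [9.0], [15.0], [22.0], [30.0]]
cycles = [1, 1, 1, 1, 0, 0, 0, 0]  # invented labels for illustration only

model = LogisticRegression().fit(distances_km, cycles)
print(model.predict([[3.0], [20.0]]))        # predicted class labels (yes/no)
print(model.predict_proba([[3.0], [20.0]]))  # class probabilities
```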
Question 100
What are two tasks that can be performed by using computer vision? Each correct answer presents a complete solution. NOTE: Each correct selection is worth one point.
A. Predict stock prices.
B. Detect brands in an image.
C. Detect the color scheme in an image.
D. Translate text between languages.
E. Extract key phrases.
B: Brand detection identifies commercial brands in images or videos from a database of thousands of global logos. You can use this feature, for example, to discover which brands are most popular on social media or most prevalent in media product placement.
C: Color analysis examines color usage within an image. Computer Vision can determine whether an image is black & white or color and, for color images, identify the dominant and accent colors.
Reference:
https://docs.microsoft.com/en-us/azure/cognitive-services/computer-vision/overview
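A minimal sketch combining both capabilities in a single Analyze call by requesting the Brands and Color visual features; the endpoint, key, and image URL are placeholders.

```python
# Minimal sketch: requesting brand and color analysis in one Analyze call.
import requests

ENDPOINT = "https://<your-resource>.cognitiveservices.azure.com"
KEY = "<your-subscription-key>"

def analyze_brands_and_color(image_url: str) -> dict:
    response = requests.post(
        f"{ENDPOINT}/vision/v3.2/analyze",
        params={"visualFeatures": "Brands,Color"},
        headers={"Ocp-Apim-Subscription-Key": KEY,
                 "Content-Type": "application/json"},
        json={"url": image_url},
    )
    response.raise_for_status()
    return response.json()

result = analyze_brands_and_color("https://example.com/ad.jpg")
for brand in result.get("brands", []):
    print(brand["name"], brand["confidence"])
print(result["color"]["dominantColors"], result["color"]["isBWImg"])
```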