PL-100 Microsoft Power Platform App Maker – Describe AI Builder models
September 6, 2023

67. Identify model types including prebuilt and custom models

Hello and welcome to this section on AI Builder. Now, AI Builder is not included in your standard Power Apps or Power Automate subscriptions, and it's also not included in the developer plan. So if I go down on the left-hand side to AI Builder > Build, you can see at the top that we can start a 30-day trial to try it out. So I'll click on "Start free trial", and there, a free trial has started. Now, the free trial is for 30 days, but you may be able to extend it three times; after that, you will need to pay to use it.

Now in this video, what we're going to do is identify different model types, including prebuilt and custom models. And actually they're very easy to identify: the custom models are these five at the top, and the prebuilt models are the ones at the bottom. So what do we mean by prebuilt and custom? Well, prebuilt means no additional configuration is required for you to actually use them; they're ready straight out of the box.

The custom ones, you do have to train. So let's have a look at each of these types. First of all, we've got category classification, which categorizes text by meaning. Now notice that's available as both a prebuilt and a custom model, and you can see the brackets saying "preview", so it's fairly new. It used to be that you had to train it; now there is a version that you don't have to. And it's the same with entity extraction: both the prebuilt and custom models try to extract things.

So the prebuilt model looks for locations and numbers, for instance. Then we've got form processing, which extracts text from images. We've got object detection, which identifies objects from uploaded images. And the prediction model: will something happen, based on past historical data? As for the prebuilt models, the business card reader uses the form processing model and tries to extract information from a business card. We've already had a look at category classification and entity extraction. The ID reader at the moment is for passports and US driver's licenses; it doesn't retain the image, by the way. Then there's invoice and receipt processing.

While they both try to get information from invoices and receipts, the difference is that receipts are generally not the big A4 or letter size; receipt processing is for the smaller items, while invoice processing is more catered to A4 pages. Key phrase extraction, so you're getting trends from Twitter, surveys, emails and forms. Language detection, so there you can find out what particular language you're looking at in a piece of text. Sentiment analysis: is the text positive, negative, neutral, or mixed? So you can imagine social media, customer feedback and email sentiment.
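To make the sentiment categories concrete, here is a deliberately crude, keyword-based sketch in Python. This is not how AI Builder works internally (its model is trained, not keyword-matched), and the word lists are invented for illustration; it only shows the four outputs the model can return.

```python
# Crude keyword-based sketch of sentiment classification.
# AI Builder's sentiment model is a trained service; this only
# illustrates its four output categories.
POSITIVE = {"good", "great", "love", "excellent", "happy"}
NEGATIVE = {"bad", "poor", "hate", "terrible", "slow"}

def classify_sentiment(text: str) -> str:
    words = {w.strip(".,!?").lower() for w in text.split()}
    pos = bool(words & POSITIVE)
    neg = bool(words & NEGATIVE)
    if pos and neg:
        return "mixed"
    if pos:
        return "positive"
    if neg:
        return "negative"
    return "neutral"

print(classify_sentiment("Great product, but terrible support"))  # mixed
```

A trained model would of course handle negation, context and meaning, which is exactly what this keyword lookup cannot do.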

Then the text recognition model. So we're talking about optical character recognition, OCR, from printed and handwritten text in images: car license plates, handwritten notes, and ID cards as well, so there's a bit of an overlap. So that's a fairly high-level introduction to AI Builder. We have got prebuilt models, where no configuration is required, and custom models. Now, all of them can be used in Power Automate; most of them can be used in Power Apps. It keeps changing, it keeps improving.

But at the moment, the ID reader, invoice processing and language detection are the ones that can only be used natively in Power Automate. That doesn't mean you can't use them with your apps: your app can call a Power Automate flow and then receive information back, but they are native to Power Automate. All of the others can be used in Power Apps. In the next video, we're going to have a look at how we can train our custom models.

68. Describe the process for preparing data and training models

In this video we're going to look at the process for preparing data and training models. Now please note, you won't actually have to do that for the PL-100; it's just the theory behind it, not the actual doing of it. The doing is more for the PL-200 certification. So, let's have a look at the category classification model first. What this does is categorize text by meaning.

So, for instance, "It's a powerful tool that helps me make quick changes" can be categorized into Good, Quick and Powerful, and you can see some other ideas on the screen. So the idea is you take all of this text, you categorize it, and then you can get a summarized view of all your reviews: for instance, what percentage were Easy, what percentage were Good or Quick. Now notice this text didn't actually include the word "good", so it's not extracting words, it's actually analyzing the meaning.
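To illustrate the tagging idea, here is a toy Python sketch that assigns tags by keyword matching; the keyword lists are invented for illustration. Notice that it misses the Good tag for the example review above, which is exactly why a trained model that analyzes meaning, rather than matching words, is needed.

```python
# Toy illustration of tagging review text with categories such as
# Good / Quick / Powerful. A real category classification model learns
# these associations from labeled examples; this sketch just
# hard-codes a few keywords per tag.
TAG_KEYWORDS = {
    "Good": {"great", "excellent", "easy"},
    "Quick": {"quick", "fast", "instant"},
    "Powerful": {"powerful", "robust", "strong"},
}

def suggest_tags(text: str) -> list[str]:
    words = {w.strip(".,!?").lower() for w in text.split()}
    return [tag for tag, keywords in TAG_KEYWORDS.items() if words & keywords]

print(suggest_tags("It's a powerful tool that helps me make quick changes"))
# ['Quick', 'Powerful'] - the Good tag is missed, since "good" never appears
```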

So if I create this, we can see the process behind it. First of all, you need to select the text: select the table and column where the text is stored. So you can see it uses Dataverse for that. Maybe you've got Power Automate collecting all of your relevant tweets and storing them in a table in your Dataverse, and then you're seeing what category each one has. Then we select tags.

So you've seen the sort of tags that we've had before: Good, Fair, Needs Improvement. You select the language, you review the tags, and then you go to the bottom and click "Train my AI". So this is the category classification model. It can be used for things like sentiment analysis (positive, negative, neutral, mixed), for spam detection, and maybe for customer request routing: if a request is about one particular feature, then it should go to one person; if it's about something else, it goes to somebody else. Next we'll have a look at the entity extraction model. This is when we're extracting things, and you can see some examples on the screen.

So, just to give you some ideas of how you could use this custom model: the prebuilt model looks for locations, so city, continent, country, region, state, street address, zip or postal code. It also looks for things like numbers, dates and emails. So we've got age, color, date, time, duration, email, event, language, money, number, ordinal, organization, percentage, person's name, phone number, speed, temperature, URL and weight. Now, I should note that documents cannot exceed 5,000 characters, and both the prebuilt and custom models are available in English, Chinese (Simplified), French, German, Portuguese, Italian and Spanish.
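As a rough illustration of what "extracting entities" means, here is a Python sketch that pulls out two of the pattern-friendly entity types with regular expressions. The real prebuilt model is trained and covers all the types listed above; this is only a conceptual stand-in, and the patterns are simplified for illustration.

```python
import re

# Hypothetical sketch of the shape of an entity extraction result:
# pieces of text tagged with an entity type. Only two pattern-friendly
# types are covered; the real model handles many more, by training.
PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "phone number": re.compile(r"\+?\d[\d ()-]{7,}\d"),
}

def extract_entities(text: str) -> list[tuple[str, str]]:
    found = []
    for entity_type, pattern in PATTERNS.items():
        for match in pattern.finditer(text):
            found.append((entity_type, match.group()))
    return found

print(extract_entities("Reach me at jo@example.com or +1 555 123 4567"))
```

Entity types like "sentiment of a person's name" or "organization" have no reliable regular expression, which is why the real model is trained rather than pattern-matched.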

So if we have a look at the process for creating this as a custom model: you select a table in your Dataverse, you select a column, you select the language, and then you create new entity types with at least five examples each. For example, if the entity type was "appearance", then you could have "This guy was {good looking}", with the entity value in curly brackets. You can also modify an existing entity type, but again, you'll need at least five examples, and you can also deselect any prebuilt entities. So this is the entity extraction model. It looks for things like locations, numbers, dates, times, emails, that sort of thing.

Next we have the form processing model, which extracts text from images. It works on JPEGs, PNGs and unprotected PDF files. Now, text-embedded PDFs are best, because you've actually got the text in the background of the PDF, so the computer can extract it much more easily and much more reliably. So let's see what you need. First of all, five or more documents with the same layout.

A group of documents with the same layout is called a collection. So first of all we need the fields: the names of the fields with the information to extract. Then we need the tables and columns to be extracted. We create collections, so we need groups of at least five documents with the same layout. We tag them after AI Builder has detected the fields and the tables: we tag the fields with the information to extract, and the tables and the columns. And then the model can be trained. So that is the form processing model. It extracts text from images, works best on JPEGs, PNGs and PDFs, and you need at least five documents with the same layout. Next we go to object detection.
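Before moving on, the form processing requirement just described, that each group of documents sharing one layout needs five or more examples, can be expressed as a simple pre-flight check. This helper is hypothetical, not part of AI Builder; it just encodes the stated minimum.

```python
from collections import Counter

# Hypothetical pre-flight check for the form processing rule that
# each collection (a group of documents sharing one layout) needs
# at least five example documents before training.
MIN_DOCS_PER_COLLECTION = 5

def check_collections(docs: list[tuple[str, str]]) -> dict[str, bool]:
    """docs is a list of (filename, layout_id) pairs."""
    counts = Counter(layout for _, layout in docs)
    return {layout: n >= MIN_DOCS_PER_COLLECTION for layout, n in counts.items()}

docs = [(f"invoice_{i}.pdf", "acme-invoice") for i in range(5)]
docs += [("misc.pdf", "other-layout")]
print(check_collections(docs))
# {'acme-invoice': True, 'other-layout': False}
```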

So this identifies objects from uploaded images and counts the number of objects included. First of all, what you need to do is select the domain; this is the use case, and there are three different domains. The first one is objects on retail shelves, where products are densely packed, and you can see that these have been trained to say "mouse" and "adapter".

There are brand logos, so logo detection, and then there are common objects, which is anything else. Then we need to provide object names; this can be up to 500 per model. We can enter them directly into AI Builder, or we can select them from a database. We then upload the images, from local storage, SharePoint or Azure Blob Storage.

We then tag them; we need at least 15 images per object name. So that is the object detection model. It is used for objects on retail shelves, brand logos and common objects, and you need at least 15 images of each object. And then finally we've got the prediction model: will something happen, based on past historical data? This could be a binary prediction, yes or no.

This could also be a multiple outcome prediction, so more than two outcomes, or it could be a numerical prediction. To do this, first of all you need to select the table that contains the data and the outcome that you want to predict. Then you need to select the column that contains the outcome. Now, you'll need at least 50 rows per historical outcome and at least ten rows for each type of outcome, so true or false.

Or outcomes like which car a customer chose, that sort of thing. After training, AI Builder gives you a grade from A, which is best, to D, which means something is wrong. Finally, what happens to all of these models? Well, they go into AI Builder models. So you can see we've got these as drafts, which we can come back to later. At the moment, I can delete and edit drafts. When I'm happy with my model, I will have a publish option, and it's then available to use in Power Automate and Power Apps where appropriate. Now, just like canvas apps, models can have different versions.

You can have up to three different versions of each model: the current published version, the last trained unpublished version, and a draft untrained version. When you're editing your model, you can start from either the current published version or the last trained unpublished version. And then finally, you can share your model by going to Share; models that you have received from somebody else will be in the "Shared with me" section, as opposed to the "My models" section. When a model is shared with you, you can use it in apps and flows, but you can't view its details or edit it.

So in this video, we've had a look at the five different custom model types. We've had a look at category classification, where you need at least ten entries for each tag; entity extraction, where we need at least ten example sentences to extract the information from; and form processing, where we need five or more documents with the same layout.

Object detection, 15 or more images of each object, and that can do objects on retail shelves, brand logos and common objects. And then finally we've got the prediction model: will something happen, based on past historical data? We need at least 50 rows in each historical outcome table, and we need at least ten rows per outcome value: yes, no, blue, red, etc.
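The prediction model's data minimums in that recap, at least 50 historical rows and at least ten rows per outcome value, can be sketched as a quick validation in Python. The function is hypothetical, not an AI Builder API; it just encodes the two thresholds for a single outcome column.

```python
from collections import Counter

# Hypothetical check for the prediction model's training-data minimums
# described above: at least 50 rows of historical data overall and at
# least 10 rows for each distinct outcome value.
MIN_TOTAL_ROWS = 50
MIN_ROWS_PER_OUTCOME = 10

def enough_training_data(outcomes: list[str]) -> bool:
    counts = Counter(outcomes)
    return (len(outcomes) >= MIN_TOTAL_ROWS
            and all(n >= MIN_ROWS_PER_OUTCOME for n in counts.values()))

# 40 "yes" rows and 20 "no" rows: 60 total, both outcomes above 10.
print(enough_training_data(["yes"] * 40 + ["no"] * 20))  # True
# Only 4 "no" rows, and 49 rows in total: fails both thresholds.
print(enough_training_data(["yes"] * 45 + ["no"] * 4))   # False
```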
