This post has been republished via RSS; it originally appeared at: New blog articles in Microsoft Tech Community.
We’ve listened to the feedback from our customers on how to create more accurate models while making the service even easier to use, and we’re delivering several core Language Understanding enhancements. In addition, based on your feedback, we’re making containers generally available in June. Finally, for developers who want to integrate Language Understanding into their CI/CD and release management pipelines, we’re previewing a sample repository template.
I cover these enhancements in more detail on the AI Show with Seth Juarez, but have also captured the key points below.
Core Language Understanding enhancements
Upgraded machine learned entities replacing composite and simple entities
We’ve introduced the ability to add sub-entities to machine learned entities, up to five levels deep. This replaces composite entities and gives you more power to recognize sophisticated entities, reuse them across your application, and even recognize multiple actions in a single utterance. In addition, the top-down thinking used to build a schema this way is more natural than the bottom-up thinking that was needed when creating composites.
If you have an application that used the old composite entities, you can easily upgrade that app to use the updated machine learned entities to take advantage of this new functionality. This upgrade is seamless for you – you do not need to re-label any of your entities, and no changes are needed in your code. The upgrade experience creates a new version of your application for you to give you the option of testing it separately.
Not only can you build and recognize more sophisticated entities, but an added benefit of sub-entities is that they can improve your model’s accuracy when you use entities as features.
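To make the nesting concrete, here is a small sketch of how a client might walk a prediction that contains a parent machine learned entity with sub-entities. The response shape, entity names, and utterance below are hypothetical illustrations in the style of a LUIS v3 prediction response, not output from a real app.

```python
# Hypothetical v3-style prediction for a parent ML entity ("travelOrder")
# with three sub-entities. Names and values are illustrative only.
prediction = {
    "query": "book 2 tickets from Cairo to Seattle",
    "prediction": {
        "topIntent": "BookFlight",
        "entities": {
            "travelOrder": [                    # parent ML entity
                {
                    "ticketCount": [2],         # sub-entity
                    "origin": ["Cairo"],        # sub-entity
                    "destination": ["Seattle"]  # sub-entity
                }
            ]
        },
    },
}

def flatten_entities(entities, prefix=""):
    """Recursively flatten nested sub-entities into dotted paths."""
    flat = {}
    for name, values in entities.items():
        if name == "$instance":  # skip metadata blocks if present
            continue
        path = f"{prefix}{name}"
        for value in values:
            if isinstance(value, dict):
                # A dict value means nested sub-entities: recurse.
                flat.update(flatten_entities(value, prefix=path + "."))
            else:
                flat.setdefault(path, []).append(value)
    return flat

flat = flatten_entities(prediction["prediction"]["entities"])
print(flat)
# {'travelOrder.ticketCount': [2], 'travelOrder.origin': ['Cairo'],
#  'travelOrder.destination': ['Seattle']}
```

Because sub-entities arrive already grouped under their parent, a single utterance containing multiple actions yields multiple parent instances in the array, each with its own children.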
Another piece of feedback we received is that you can lose context when updating your entities and features while labeling. To address this, we’ve added the new entity palette (pictured below), which lets you see all the machine learned entities and list entities you’ve created while you’re labeling new utterances. You can also edit your entities and add or edit features while labeling utterances.
Screen shot of the new portal entity palette.
Improved labeling tools
We’ve listened to your feedback about difficulties with labeling utterances, and there are several changes to the interface in the portal that make this interaction easier. Now you can label entities either from the new entity palette or inline.
When you use the entity palette, you can label a child node and the parent will be inferred; when you label the parent, the child labels will automatically merge into it. Choose the entity labeler tool, select the entity you want to label, and then highlight the corresponding text in the utterance.
Labeling with the new entity palette experience.
For customers that prefer inline labeling, we have improved that as well. Inline labeling supports labeling entities in any order with a cascading menu.
Labeling with enhanced inline labeling experience.
In addition, predictions are shown with a dotted line when a new utterance is added. If all the predictions are correct for a new entity, you can confirm them all in one click.
Viewing entity predictions and confirming to label.
Normalized word forms
We have addressed customer issues around recognizing variations of a word; for example, changing 'flight' to 'flights' could show very different results for intent predictions. To solve this, we've added a setting called 'Normalize word forms' that helps your model recognize plurals of a word automatically and generalize better. This setting is currently available in English only; to turn it on, go to your application's settings in the Manage pane.
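To illustrate the idea behind the setting, here is a deliberately crude sketch of word-form folding: inflected variants are mapped to a shared base form so that 'flight' and 'flights' look alike to the model. This naive suffix rule is for illustration only; the service's actual normalization happens inside LUIS and is far more robust.

```python
def normalize(token: str) -> str:
    """Naive illustration of folding plural word forms to a base form."""
    token = token.lower()
    if token.endswith("ies") and len(token) > 4:
        return token[:-3] + "y"   # cities -> city
    if token.endswith("s") and not token.endswith("ss"):
        return token[:-1]         # flights -> flight
    return token

# With normalization, both surface forms map to the same token,
# so they contribute the same signal to intent prediction.
print(normalize("flights") == normalize("flight"))  # True
```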
Change from constraints to required features
If you were using constraints before, we’ve changed this functionality slightly. You can still constrain the output of a machine learned entity: you now add a required feature to the entity, which ensures the entity won’t be predicted without the presence of that required feature.
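The gating behavior can be sketched as follows. The entity name, the list-style feature of known city names, and the function are all hypothetical stand-ins for what the service does internally when a required feature is attached to a machine learned entity.

```python
# Hypothetical list feature: known city names the entity requires.
KNOWN_CITIES = {"cairo", "seattle", "london"}

def predict_city_entity(span: str, require_feature: bool = True):
    """Emit a 'city' entity only if the required feature fired on the span."""
    feature_fired = span.lower() in KNOWN_CITIES
    if require_feature and not feature_fired:
        return None  # constraint: no prediction without the required feature
    return {"entity": "city", "value": span}

print(predict_city_entity("Seattle"))  # predicted: feature matched
print(predict_city_entity("Tuesday"))  # suppressed -> None
```

Without the required feature (the old unconstrained behavior), any plausible span could be predicted; with it, predictions are limited to spans where the feature is present.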
Containers generally available
A frequent customer request has been support for Docker containers. Container support has been in preview, and as of June 1 you can deploy and host Language Understanding anywhere using the generally available Docker containers. When hosting the service in a container, you have the flexibility to scale as much as you need without any limitations on TPS, and you can use Language Understanding in scenarios where you don’t wish to send data to the cloud.
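Once a container is running, your application talks to it over HTTP on your own network instead of the cloud endpoint. The sketch below builds such a local prediction URL; the host, port, and app id are placeholders, and the route assumes the container mirrors the cloud v3 prediction path, so check the container documentation for your exact endpoint.

```python
from urllib.parse import quote

def container_predict_url(host: str, app_id: str, query: str,
                          slot: str = "production") -> str:
    """Build a v3-style prediction URL against a locally hosted container.

    Assumption: the container exposes the same route shape as the cloud
    prediction endpoint. Verify against the container docs before use.
    """
    return (f"http://{host}/luis/prediction/v3.0/apps/{app_id}"
            f"/slots/{slot}/predict?query={quote(query)}")

# "localhost:5000" and "<your-app-id>" are placeholders.
url = container_predict_url("localhost:5000", "<your-app-id>", "book 2 flights")
print(url)
```

Because the request never leaves your network, this pattern fits the no-data-to-the-cloud scenarios mentioned above.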
DevOps sample
For developers that want to integrate Language Understanding into their CI/CD and release management pipelines, we’re previewing a sample repository template. This template enables you to develop a Language Understanding application while following DevOps engineering practices that adhere to software engineering fundamentals around source control, testing, CI/CD and release management. You can customize it for use with your own project. Learn more and try it out here.
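One building block such a pipeline typically includes is a regression check that fails the build when intent predictions drift from a labeled test set. The sketch below shows the shape of that check; the test cases are made up, and `fake_predict` is a hypothetical stand-in for a call to a deployed test version of the app.

```python
# Hypothetical labeled test set a CI job might load from source control.
test_set = [
    {"text": "book a flight to Cairo", "expected": "BookFlight"},
    {"text": "cancel my reservation",  "expected": "Cancel"},
]

def fake_predict(text: str) -> str:
    """Stand-in for a call to the prediction endpoint of a test version."""
    return "BookFlight" if "book" in text.lower() else "Cancel"

# Collect every case where the predicted intent disagrees with the label.
failures = [case for case in test_set
            if fake_predict(case["text"]) != case["expected"]]

print(f"{len(test_set) - len(failures)}/{len(test_set)} passed")
# A CI job would fail the build when failures is non-empty.
```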
Get started with Language Understanding today.
Watch the AI Show with Seth Juarez to see these enhancements in more detail.