Cognitive

In the new age of mobile application development, Cloud Computing and Big Data are merging into a trend that utilizes both remote computing and large-scale computation: Cognitive Computing.

Massive data sets describing the world around us (images, video, audio, and text) are compiled every second, and we quickly and accurately sift through that data to reach meaningful conclusions.

Microsoft Cognitive Services implement cognitive computing and employ machine learning to provide actionable insights using vision, speech, language, knowledge, and search APIs.


Bots are the human language and communication emissaries of cognitive computing. Azure Bot Service provides a foundation for building custom bots that allow humans to interact with machines in productive ways, and we can show you how.

Image Processing

The Computer Vision API identifies people and objects with a reported level of confidence. Individuals are identified along with what they look like, what they are wearing, their approximate age and demographic, what they are doing, and whether they are part of a group. Objects are identified (buildings, houses, natural features such as rivers or mountains, household objects such as dinner rolls or flowers) and placed in a context such as a city, a plate of bread, or a train station. Tags denote the notable aspects of the image: the most prominent or identifiable elements that help determine what the image is “about”.
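
As a rough sketch, a call to the analysis endpoint can be made over plain HTTP. The subscription key, region, and image URL below are placeholders, and the API version in the path may differ from the one deployed for your account.

```python
import requests

# Placeholders: substitute your own Cognitive Services key, region, and image.
SUBSCRIPTION_KEY = "YOUR_COMPUTER_VISION_KEY"
ENDPOINT = "https://westus.api.cognitive.microsoft.com/vision/v2.0/analyze"

response = requests.post(
    ENDPOINT,
    headers={"Ocp-Apim-Subscription-Key": SUBSCRIPTION_KEY},
    params={"visualFeatures": "Description,Tags,Faces"},
    json={"url": "https://example.com/street-scene.jpg"},  # image to analyze
)
response.raise_for_status()
analysis = response.json()

# Each tag and caption carries a confidence score between 0 and 1.
for tag in analysis.get("tags", []):
    print(f"{tag['name']}: {tag['confidence']:.2f}")
for caption in analysis.get("description", {}).get("captions", []):
    print(f"Caption: {caption['text']} ({caption['confidence']:.2f})")
```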

Cognitive Services

Effective cognitive computing requires easy-to-use service endpoints that apps can call with images, audio, and other media, and that return straightforward, usable results from sophisticated cognitive systems. Our team gets its hands on cognitive algorithms quickly using Microsoft’s Cognitive Services APIs.

Face Detection

The Face API imbues apps with the ability to identify a person using an image of their face. The API compares two images containing faces and reports on how well they match up. This is accomplished using proportions of the head, hair color, and facial landmarks such as eyes, eyebrows, nose, and lips.
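
A minimal sketch of that comparison flow is shown below: detect a face in each image to obtain a transient face ID, then ask the verify endpoint how well the two match. The key, region, and image URLs are placeholders.

```python
import requests

SUBSCRIPTION_KEY = "YOUR_FACE_API_KEY"          # placeholder
BASE = "https://westus.api.cognitive.microsoft.com/face/v1.0"
HEADERS = {"Ocp-Apim-Subscription-Key": SUBSCRIPTION_KEY}

def detect_face_id(image_url):
    """Detect the first face in an image and return its transient faceId."""
    resp = requests.post(f"{BASE}/detect", headers=HEADERS, json={"url": image_url})
    resp.raise_for_status()
    faces = resp.json()
    return faces[0]["faceId"] if faces else None

face1 = detect_face_id("https://example.com/person-a.jpg")  # placeholder images
face2 = detect_face_id("https://example.com/person-b.jpg")

# Ask the service whether the two detected faces belong to the same person.
verify = requests.post(f"{BASE}/verify", headers=HEADERS,
                       json={"faceId1": face1, "faceId2": face2})
verify.raise_for_status()
result = verify.json()
print("Same person:", result["isIdentical"], "confidence:", result["confidence"])
```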

Emotion Detection

The detection of human emotion based upon facial expression allows systems to understand how people may be feeling. The Emotion API is invoked using a simple URL call which uploads your image containing one or more faces. Cognitive Services processes the image and returns emotion indices for each face such as anger, contempt, fear, happiness, and surprise.
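
A sketch of that call, assuming the standalone Emotion API endpoint, looks like the following; the key and image URL are placeholders, and the exact endpoint and version may differ for your subscription.

```python
import requests

SUBSCRIPTION_KEY = "YOUR_EMOTION_API_KEY"       # placeholder
ENDPOINT = "https://westus.api.cognitive.microsoft.com/emotion/v1.0/recognize"

resp = requests.post(
    ENDPOINT,
    headers={"Ocp-Apim-Subscription-Key": SUBSCRIPTION_KEY},
    json={"url": "https://example.com/group-photo.jpg"},   # placeholder image
)
resp.raise_for_status()

# The response contains one entry per detected face,
# each with a score for every emotion.
for face in resp.json():
    scores = face["scores"]
    top = max(scores, key=scores.get)
    print(face["faceRectangle"], "->", top, round(scores[top], 2))
```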

Language

While speech recognition determines what a person is saying, language understanding extracts deeper meaning such as topic, sentiment, and desire. We build custom language models to interpret what a person wants using the Language Understanding Intelligent Service (LUIS). We map human utterances in natural language to entities and intents to know what object or person someone is talking about, how they feel about it, and what they would like to see happen with it.
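
Once a LUIS app has been trained and published, querying it is a single HTTP call. In the sketch below the app ID, key, sample utterance, and the intents and entities it resolves to are all placeholders that depend on your custom model.

```python
import requests

APP_ID = "YOUR_LUIS_APP_ID"                     # placeholders
SUBSCRIPTION_KEY = "YOUR_LUIS_KEY"
ENDPOINT = f"https://westus.api.cognitive.microsoft.com/luis/v2.0/apps/{APP_ID}"

resp = requests.get(
    ENDPOINT,
    params={"subscription-key": SUBSCRIPTION_KEY,
            "q": "Book me a flight to Seattle next Friday"},
)
resp.raise_for_status()
prediction = resp.json()

# The top-scoring intent says what the user wants;
# the entities say what or whom they are talking about.
print(prediction["topScoringIntent"]["intent"],
      prediction["topScoringIntent"]["score"])
for entity in prediction["entities"]:
    print(entity["type"], "->", entity["entity"])
```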

Knowledge

With the Knowledge Exploration Service (KES), we can search complex data using natural language queries. We define your data schema and populate it with your data, construct query grammars that parse natural language requests and extract and filter data, and then host your query engine as a service online. We employ natural language understanding to evaluate queries and offer intelligent recommendations, query auto-completion, and semantic search.
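
The sketch below assumes a KES engine that has already been built from your schema and grammar and is hosted locally; the host, port, query, attribute names, and the exact shape of the interpret response all depend on your own deployment and should be treated as placeholders.

```python
import requests

# Assumes a locally hosted KES engine exposing interpret/evaluate endpoints.
KES = "http://localhost:8000"

# Interpret a partial natural-language query to get completions and parses.
interp = requests.get(f"{KES}/interpret",
                      params={"query": "papers about machine lear",
                              "complete": 1, "count": 3})
interp.raise_for_status()
interpretations = interp.json()["interpretations"]

# Evaluate the top structured expression against the index to retrieve data.
if interpretations:
    expr = interpretations[0]["rules"][0]["output"]["value"]
    data = requests.get(f"{KES}/evaluate",
                        params={"expr": expr, "count": 5,
                                "attributes": "Title,Year"})   # placeholder attributes
    data.raise_for_status()
    for entity in data.json()["entities"]:
        print(entity)
```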

Speech Recognition

The Custom Speech API provides a powerful speech recognition system exposing acoustic models and language models for customization. Identifying and verifying a particular speaker is the next step in speech cognition and is provided by the Speaker Recognition API.
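
As a rough sketch, short-audio transcription against a baseline model can be done with a single REST call; a Custom Speech deployment is addressed through its own published endpoint. The key, region, and audio file below are placeholders.

```python
import requests

SUBSCRIPTION_KEY = "YOUR_SPEECH_KEY"            # placeholder
REGION = "westus"                               # placeholder region
ENDPOINT = (f"https://{REGION}.stt.speech.microsoft.com/"
            "speech/recognition/conversation/cognitiveservices/v1")

# Audio is expected as 16 kHz, 16-bit mono PCM WAV for this endpoint.
with open("sample.wav", "rb") as audio:
    resp = requests.post(
        ENDPOINT,
        params={"language": "en-US", "format": "simple"},
        headers={
            "Ocp-Apim-Subscription-Key": SUBSCRIPTION_KEY,
            "Content-Type": "audio/wav; codecs=audio/pcm; samplerate=16000",
        },
        data=audio,
    )
resp.raise_for_status()
result = resp.json()
print(result.get("RecognitionStatus"), result.get("DisplayText"))
```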

Search

Although not strictly a cognitive function, the search of web pages, images, news, and video is often a necessary part of cognitive projects. The Bing Web Search API provides a search engine which consumes search query terms and produces JSON search responses.
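
A minimal sketch of such a query is shown below; the subscription key and search terms are placeholders, and the endpoint host may differ depending on when your Bing Search resource was provisioned.

```python
import requests

SUBSCRIPTION_KEY = "YOUR_BING_SEARCH_KEY"       # placeholder
ENDPOINT = "https://api.cognitive.microsoft.com/bing/v7.0/search"

resp = requests.get(
    ENDPOINT,
    headers={"Ocp-Apim-Subscription-Key": SUBSCRIPTION_KEY},
    params={"q": "cognitive computing", "count": 5, "mkt": "en-US"},
)
resp.raise_for_status()
results = resp.json()

# Web page hits live under webPages.value in the JSON response.
for page in results.get("webPages", {}).get("value", []):
    print(page["name"], page["url"])
```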