Cognitive services are a specific subclass of [[Machine Learning]] algorithms that provide intelligence-like capabilities for sensors. For example, they can take microphone input and use voice recognition to determine whether a user is talking and possibly to identify that user. They provide speech analysis to turn the voice recording into text and then language understanding to extract the topics and intents behind the words. Cognitive services also provide object recognition for still pictures and video: they are able to identify objects, animals and people as well as their poses, activities and sentiments. They can also analyze any kind of written content or signage. In other words, they are a collection of services that turn raw input from different kinds of sensors into actionable information.

![[Microsoft Azure Cognitive Services.png]]
*Microsoft Azure Cognitive Services, Microsoft, https://azure.microsoft.com/en-us/services/cognitive-services/*

Cognitive services are usually offered as a cloud-based family of services. Manufacturers and service providers can utilize these services and integrate them into their products. As a result, customers can, for example, use natural language to talk to their smart lights, music systems, TVs or game consoles, which in turn use cognitive services to analyze these requests and turn them into actionable commands they can react to. Security cameras use cognitive services to differentiate between a pet running around the house and a stranger.

![[Google Nest Aware.png]]
*Google Nest Aware, Google, https://store.google.com/us/product/nest_aware*

The assumption is that everything digital and connected will be able to utilize cognitive services to extend its abilities and understand what we say and do, as well as the immediate environment.

## Key drivers for Cognitive Services

One goal of the [[Internet of Things]] is to reshape the way humans interact with their physical environment. It might be economically feasible for watches, fridges, ovens, toothbrushes or light switches to evolve into digitally enhanced things with smartphone-like capabilities, but from a [[User Experience]] point of view it's just not practical. Most objects have very reduced and focused user interfaces that have been optimized for their original purpose and cannot control the full set of digitally augmented abilities. For example, a simple on/off light switch will not necessarily be able to reflect the full capabilities of modern light bulbs, which can also regulate their brightness or even color. Some objects might have enough surface area to act as a digital screen, for example walls, tables and windows; essentially, this would turn the light switch into a smartphone. While this might work in some cases, not every object has the space for an additional screen, and it's just not practical to make every object more complex.

![[Brilliant All-in-One Smart Home Control.png]]
*Brilliant All-in-One Smart Home Control, Brilliant, https://www.brilliant.tech/products/brilliant-control-smart-home-smart-lighting-three-switch*

One way to solve this is to offload additional interfaces to a remote control. This might be a dedicated remote or a form of second screen, usually a smartphone or some kind of control hub. While this works, it is very tedious for users to pull out their smartphone, open the respective app and press a button just to turn on their oven. Users want direct control of the device they are using, even in its digitally enhanced state.
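Concretely, a voice-driven device could skip the extra screen or app entirely: it hands the recording to a cloud speech-to-text service and maps the transcript to an action. The following minimal sketch illustrates the sensor-to-information pipeline described above; the endpoint URL, API key, response field and keyword-based intent matching are illustrative assumptions, not a specific vendor's API.

```python
# Minimal sketch: from a spoken command to a device action via a cloud
# speech-to-text service. The endpoint, key, response field and the
# keyword matching below are illustrative assumptions, not a real vendor API.
import requests

SPEECH_ENDPOINT = "https://example-region.api.example-cloud.com/speech-to-text"  # placeholder endpoint
API_KEY = "YOUR_API_KEY"  # placeholder subscription key


def transcribe(audio_path: str) -> str:
    """Send a WAV recording to the (assumed) speech-to-text endpoint and return the transcript."""
    with open(audio_path, "rb") as audio:
        response = requests.post(
            SPEECH_ENDPOINT,
            headers={"Api-Key": API_KEY, "Content-Type": "audio/wav"},
            data=audio,
        )
    response.raise_for_status()
    # Assumed response shape: {"transcript": "Turn on the living room lights."}
    return response.json().get("transcript", "")


def extract_intent(transcript: str) -> dict:
    """Tiny stand-in for a language-understanding service: map keywords to a device command."""
    text = transcript.lower()
    if "light" in text and ("on" in text or "off" in text):
        return {"device": "light", "action": "off" if "off" in text else "on"}
    return {"device": None, "action": None}


if __name__ == "__main__":
    transcript = transcribe("command.wav")   # e.g. "Turn on the living room lights."
    command = extract_intent(transcript)     # e.g. {"device": "light", "action": "on"}
    print(transcript, command)
```

In a real product, the keyword matching would be replaced by a language-understanding service that returns structured topics, intents and entities, as described above.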
The goal of cognitive services is to create more relatable interfaces by using natural ways of interacting with objects that otherwise have no obvious interface. They turn connected objects into smart devices by providing the "smarts" in the first place and enabling new ways to control the digital aspects and abilities of objects.

## Summary

**What it is**: Services that provide intelligence-like capabilities for sensors.

**What it enables**: Everything digital is able to connect to digital services that "make sense" of sensory data, adding the "smarts" to smart objects.