Sensory Launches Cloud-Based Voice and Vision AI Services

Sensory Inc., a Silicon Valley innovator of machine learning solutions for speech recognition and biometric identification, announces the beta release of SensoryCloud, a complete AI as a Service platform designed for processing voice and vision AI workloads in the cloud. Leveraging Sensory’s decades of experience in voice and vision AI, the SensoryCloud platform launches with AI services such as Speech to Text, Sound Identification, Wake Word Verification, Face Verification, and Speaker Identification. Additional services and updates will be offered throughout the year.

With SensoryCloud, customers get a cloud AI platform that puts them in full control, with a focus on flexibility and accuracy. Customers have complete control over how their AI solutions are deployed and how their data is managed and accessed. SensoryCloud delivers a language- and platform-agnostic AI inference engine wrapped in a well-developed API. Further, the AI experts at Sensory leverage both open-source and proprietary solutions to ensure best-in-class performance, and the resulting services are delivered in containers through an API or lightweight SDKs.

SensoryCloud AI as a Service Includes:

  • SensoryCloud Speech to Text (STT) – A world-class GPU-accelerated speech recognition engine that can be quickly customized to process application-specific jargon. Ideal for either streaming or batch-mode operations, with typical word error rates below 5% within trained domains.
  • SensoryCloud Wake Word Revalidation – Building on Sensory’s expertise in wake word detection, cloud-based verification of custom, branded wake words enables up to a 90% reduction in false-alarm events.
  • SensoryCloud Sound Identification – Offers an extensive library of sounds with a multi-stage approach optimized for speed, efficiency, and accuracy. Developers can quickly train models to recognize custom sounds in addition to standard sounds such as alarms, sirens, breaking glass, crying babies, coughs, sneezes, and doorbells.
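As a hedged illustration of how a client might talk to a cloud STT service like the one described above, the sketch below assembles a batch-transcription request body and chunks audio for a streaming session. The endpoint URL, field names (`custom_vocabulary`, `sample_rate_hz`), and chunk sizing are hypothetical, not Sensory's published API.

```python
import base64
import json

# Hypothetical endpoint; Sensory's real API routes are not shown here.
STT_URL = "https://cloud.example.com/v1/speech-to-text"

def build_batch_request(audio: bytes, custom_vocab: list) -> str:
    """Assemble a JSON body for a one-shot (batch-mode) transcription call.
    The custom vocabulary field reflects the ability to handle
    application-specific jargon."""
    return json.dumps({
        "audio": base64.b64encode(audio).decode("ascii"),
        "encoding": "LINEAR16",            # assumed: raw 16-bit PCM
        "sample_rate_hz": 16000,
        "custom_vocabulary": custom_vocab,
    })

def stream_chunks(audio: bytes, chunk_bytes: int = 3200):
    """Yield fixed-size chunks for a streaming session.
    3200 bytes = 100 ms of 16 kHz, 16-bit, mono audio."""
    for i in range(0, len(audio), chunk_bytes):
        yield audio[i:i + chunk_bytes]

# Example: one second of silence, prepared both ways.
audio = b"\x00" * 32000
body = build_batch_request(audio, ["SensoryCloud"])
chunks = list(stream_chunks(audio))
print(len(chunks))  # 10 chunks of 100 ms each
```

In a streaming deployment the chunks would be sent over a persistent connection as they are captured, trading a small per-chunk overhead for low latency; batch mode sends the whole recording at once.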

Sensory has historically focused on AI at the edge. However, many embedded clients expressed a strong desire for cloud solutions with degrees of freedom not available from typical cloud service providers. “We have a history of building fast and accurate AI models, and we paired this capability with some of the brightest and freshest minds in the cloud industry,” says Todd Mozer, Sensory CEO. “The result is a hybrid cloud platform that uses state-of-the-art AI to address customers’ unique needs for control, flexibility, cost, accuracy, reliability, features, latency, and privacy.”


Neurotechnology releases mobile app for its image recognition platform

Neurotechnology, a provider of deep learning-based solutions and high-precision biometric identification technologies, has released a mobile app that lets users easily run image recognition models from their phones. The platform enables users to build, train and deploy image recognition models without requiring an understanding of AI or deep learning. The app serves as a useful companion to the online platform, which itself has received a recent update.

The app, which is available on both the Google Play Store and the Apple App Store, brings a new level of convenience: models can now be applied immediately to photos taken with a phone camera. App users can choose from a wide range of pre-trained image recognition models, or they can train a custom model using the web interface and make it available on their phone immediately. Additionally, the app enables users to:

  • Use the image similarity search function to upload a photo from a phone and then find similar photos within a dataset
  • Make predictions using photos uploaded from the phone gallery
  • Use the AI-assisted labeling feature to save predictions as image labels in the dataset
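The image similarity search mentioned above typically works by comparing feature vectors that a network has already extracted from each image. The platform's actual model and API are not shown here; this is a generic, self-contained sketch of the ranking step, with made-up image names and vectors.

```python
import math

def cosine(a, b):
    """Cosine similarity between two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def most_similar(query, dataset, top_k=3):
    """Rank dataset images (name -> feature vector) by similarity
    to the query vector and return the top_k names."""
    ranked = sorted(dataset.items(),
                    key=lambda kv: cosine(query, kv[1]),
                    reverse=True)
    return [name for name, _ in ranked[:top_k]]

# Toy dataset: vectors stand in for embeddings a real network would produce.
dataset = {
    "cat_1.jpg": [0.9, 0.1, 0.0],
    "cat_2.jpg": [0.8, 0.2, 0.1],
    "dog_1.jpg": [0.1, 0.9, 0.2],
}
print(most_similar([1.0, 0.0, 0.0], dataset, top_k=2))
# → ['cat_1.jpg', 'cat_2.jpg']
```

In practice the uploaded photo is embedded once on the server and compared against precomputed vectors for the whole dataset, which is why results come back quickly even for large collections.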

The latest changes to the online platform focus on delivering the most user-friendly experience to date through a range of new features and changes to the online dashboard and website. These include:

  • A new pay-as-you-go wallet system that lets users pay for only what they use on the platform
  • Object detection models are now available for all registered users to build, train and deploy; previously they were limited to premium subscription plans
  • Users with large image datasets can now retrain full classification model networks, and can choose to stop classification model training automatically when there is no improvement for a specified amount of time
  • Users can upload and delete images from a project through the REST API
  • New C# REST API code samples and Swagger specifications broaden accessibility, giving users the tools needed to incorporate the platform into their own projects
  • In-depth user guides and video tutorials have been added to the website to help every user make the most of the range of image recognition tools available on the platform
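To illustrate the kind of REST calls the image upload and delete feature implies, the sketch below builds (but does not send) the two requests with Python's standard library. The base URL, routes, and auth scheme are hypothetical; the platform's real endpoints are described in its Swagger specification, and the published C# samples play the equivalent role there.

```python
import urllib.request

# Hypothetical base URL for illustration only.
BASE = "https://api.example.com/v1"

def upload_image_request(project_id, token, image_bytes, name):
    """Build a POST request that would upload an image to a project."""
    return urllib.request.Request(
        f"{BASE}/projects/{project_id}/images?name={name}",
        data=image_bytes,
        method="POST",
        headers={
            "Authorization": f"Bearer {token}",
            "Content-Type": "application/octet-stream",
        },
    )

def delete_image_request(project_id, token, image_id):
    """Build a DELETE request that would remove an image from a project."""
    return urllib.request.Request(
        f"{BASE}/projects/{project_id}/images/{image_id}",
        method="DELETE",
        headers={"Authorization": f"Bearer {token}"},
    )

# Inspect the requests without sending them.
req = upload_image_request("demo", "secret-token", b"\x89PNG...", "sample.png")
print(req.method, req.full_url)
```

Sending either request would be a single `urllib.request.urlopen(req)` call once real credentials and endpoints are substituted in.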