
Microsoft invests $25M in AI tech for disability solutions

The program, dubbed AI for Accessibility, comes off the back of the company’s $50 million investment in its AI for Earth program

[Image: Brad Smith (Microsoft). Credit: Microsoft]

Microsoft is investing US$25 million into a five-year program aimed at giving developers the artificial intelligence (AI) tools to build intelligent solutions to benefit people with disabilities.

The program, dubbed AI for Accessibility, comes off the back of the company’s $50 million investment in its AI for Earth program, announced last year and intended to use AI to unlock solutions to the planet’s climate, water, agriculture, and biodiversity issues.

Microsoft’s latest AI program, announced at the company’s Build 2018 developers conference, comprises grants, technology investments and expertise. In a potential boon for developers tapping into the program, it will also see AI for Accessibility innovations incorporated into Microsoft cloud services.

“AI can be a game changer for people with disabilities,” Microsoft president and chief legal officer Brad Smith said. “Already, we’re witnessing this as people with disabilities expand their use of computers to hear, see and reason with impressive accuracy.

“At Microsoft we’ve been putting to work stronger solutions such as real-time speech-to-text transcription, visual recognition services, and predictive text functionality.

“AI advances like these offer enormous potential by enabling people with vision, hearing, cognitive, learning, mobility disabilities and mental health conditions to do more in three specific scenarios: employment, modern life, and human connection,” he said.

The new program will be run by Microsoft’s chief accessibility officer, Jenny Lay-Flurrie, and will build on the success the company’s existing accessibility team has had over the past three years working with developers and engineers across Microsoft.

According to Smith, the accessibility team’s expanded mission is to provide a new level of tools and support for developers around the world.

“The AI for Accessibility program will do this in three ways,” Smith said. “First, we will provide seed grants of technology to developers, universities, nongovernmental organisations, and inventors taking an AI-first approach focused on creating solutions that will create new opportunities and assist people with disabilities with work, life, and human connections.

“Next, we will identify the projects that show the most promise and make larger investments of technology and access to Microsoft AI experts to help bring them to scale.

“And third, as we infuse AI and inclusive design across our offerings, we will work with partners to incorporate AI innovations into platform level services to empower others to maximise the accessibility of their offerings,” he said.

The announcement comes as Microsoft unveils a host of new tools at its Build 2018 event for developers to build AI solutions and multi-device, multi-sense experiences.

Among the new AI developments is Microsoft’s move to open source the Azure IoT Edge Runtime, letting users modify and debug edge applications and giving them greater transparency and control over how those applications run.
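For a sense of what a custom edge module looks like in code, here is a minimal sketch using the azure-iot-device Python package (a later packaging of the SDK than what shipped at Build 2018). The output name "output1" is a convention set in the deployment manifest, not something fixed by the runtime:

```python
# A minimal sketch of a custom Azure IoT Edge module in Python, using the
# azure-iot-device package (pip install azure-iot-device). The IoT Edge
# runtime injects the module's credentials via environment variables.
from azure.iot.device import IoTHubModuleClient, Message

def main():
    # Build a client from the environment the edge runtime provides.
    client = IoTHubModuleClient.create_from_edge_environment()
    client.connect()

    # Forward a simple telemetry message to a named output; routing between
    # modules is declared in the edge deployment manifest, not in code.
    msg = Message('{"status": "module started"}')
    client.send_message_to_output(msg, "output1")
    client.disconnect()

if __name__ == "__main__":
    main()
```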

Custom Vision will now run on Azure IoT Edge, enabling devices, such as drones and industrial equipment, to take action quickly without needing cloud connectivity.
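In practice, a Custom Vision model deployed this way is typically exported as a container that exposes a local HTTP scoring endpoint on the device, so inference works even when the cloud is unreachable. The sketch below assumes a hypothetical local endpoint at http://localhost:80/image and the predictions JSON shape returned by exported Custom Vision containers; both should be verified against the actual module configuration:

```python
# A minimal sketch of scoring an image against a Custom Vision model running
# locally on an Azure IoT Edge device. The endpoint URL is an assumption.
import requests

SCORING_URL = "http://localhost:80/image"  # hypothetical local endpoint

def classify(image_path: str) -> dict:
    # The exported container accepts the raw image bytes in the request body.
    with open(image_path, "rb") as f:
        response = requests.post(
            SCORING_URL,
            data=f.read(),
            headers={"Content-Type": "application/octet-stream"},
        )
    response.raise_for_status()
    return response.json()  # predictions with tag names and probabilities

if __name__ == "__main__":
    for prediction in classify("frame.jpg").get("predictions", []):
        print(prediction["tagName"], prediction["probability"])
```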

Indeed, Chinese drone company DJI is partnering with Microsoft to create a new software development kit (SDK) for Windows 10 PCs. Microsoft revealed that DJI has also chosen Azure as its preferred cloud provider for its commercial drone and software-as-a-service (SaaS) solutions.

Microsoft also announced a joint effort with Qualcomm Technologies to create a vision AI developer kit running Azure IoT Edge. The solution is expected to make available the key hardware and software required to develop camera-based IoT solutions.

At the same time, eight years after first shipping Kinect, Microsoft announced its new Project Kinect for Azure, a package of sensors, including the company’s next-generation depth camera, with onboard compute designed for AI on the edge.

In the words of Microsoft, Project Kinect for Azure empowers new scenarios for developers working with ambient intelligence.

Meanwhile, a new Speech Devices SDK announced at the event is designed to deliver audio processing from multi-channel sources for more accurate speech recognition, including noise cancellation, far-field voice and more.

The new offering is expected to help developers build a variety of voice-enabled scenarios like in-car or in-home assistants, smart speakers and other digital assistants.
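For illustration, a one-shot recognition call against the underlying speech service looks roughly like the sketch below, using the Azure Speech SDK for Python (a later packaging of the same service; pip install azure-cognitiveservices-speech). The subscription key and region are placeholders:

```python
# A minimal sketch of one-shot speech recognition with the Azure Speech SDK.
# The Speech Devices SDK layers multi-channel audio processing (noise
# cancellation, far-field voice) on top of this same recognition service.
import azure.cognitiveservices.speech as speechsdk

speech_config = speechsdk.SpeechConfig(subscription="YOUR_KEY", region="YOUR_REGION")

# Uses the default microphone; device audio input can be configured separately.
recognizer = speechsdk.SpeechRecognizer(speech_config=speech_config)

result = recognizer.recognize_once()
if result.reason == speechsdk.ResultReason.RecognizedSpeech:
    print("Recognized:", result.text)
else:
    print("No speech recognized:", result.reason)
```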

The company also announced a preview of Project Brainwave, its architecture for deep neural net processing, which is now available on Azure and on the edge, and is now fully integrated with Azure Machine Learning.

Microsoft said that new Azure Cognitive Services updates, including a unified speech service with improved speech recognition and text-to-speech, support for customised voice models, and translation, will make it easier for developers to add intelligence to their applications.
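The text-to-speech half of the unified service is exposed through the same SDK. A minimal sketch, again with placeholder credentials, sending synthesized audio to the default speaker:

```python
# A minimal sketch of text-to-speech with the Azure Speech SDK; custom voice
# models and voice selection are configured on the SpeechConfig object.
import azure.cognitiveservices.speech as speechsdk

speech_config = speechsdk.SpeechConfig(subscription="YOUR_KEY", region="YOUR_REGION")

# Synthesizes to the default speaker; file or stream outputs are also possible.
synthesizer = speechsdk.SpeechSynthesizer(speech_config=speech_config)

result = synthesizer.speak_text_async("AI can be a game changer.").get()
if result.reason == speechsdk.ResultReason.SynthesizingAudioCompleted:
    print("Speech synthesis complete.")
```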

Additionally, Microsoft announced a new partnership with GitHub that is intended to bring the power of Azure DevOps services to GitHub users.

“Today, we released the integration of Visual Studio App Center and GitHub, which allows GitHub developers building mobile apps for iOS, Android, Windows, and macOS devices to seamlessly automate DevOps processes right from within the GitHub experience,” Microsoft said in a statement.


