Building Call for Code apps with AI
Learn how to incorporate AI into your apps to help mitigate the risks from natural disasters.
Welcome back to the second installment in our Call for Code Technology mini-series, where I identify and discuss one of the six core technology focus areas within Call for Code. You’ll learn about that technology, how to best use it on IBM Cloud™, and where to find the best resources to fuel your innovation. If you missed my first post about building Call for Code apps with IoT and Node-RED, you can find it here.
First things first, if you haven’t already, accept the Call for Code challenge and join our community.
In Part 2 of this series, I’m going to talk about artificial intelligence (AI) and how IBM Watson™ services can help you. I’ll show you just how powerful the suite of IBM Watson AI services can be!
AI can be defined in a few different ways, depending on the lens you’re looking through. Some people think of AI as the virtual assistant on your phone, such as Siri, while others envision AI as a supercomputer that can do anything, including taking over the world! The reality is that AI currently sits somewhere between those two extremes.
For a more in-depth read about the evolution and complexity of AI, check out this article, “The languages of AI,” as it discusses the beginnings of AI and how it evolved to where we are today.
AI can be better thought of in this way: how closely can a computer simulate a real person in completing a task? That task might be analyzing an image, understanding a document, or deciphering voice input and acting on it, among other things. When it comes to tasks like these, IBM Watson is quite good at them and then some. Let’s talk more about the technical capabilities of Watson so I can show you how to get started.
The power of IBM Watson
To understand just how versatile and powerful Watson is, take a look at the complete offering of Watson products and services. As you can see, Watson’s AI ability is very comprehensive! It can understand and process complicated, unstructured data by using the Natural Language Understanding service with the ability to extract entities, relationships, keywords, and more. Want to go beyond just a basic chatbot and implement a true virtual assistant? Take a look at Watson Assistant, which is pre-trained with industry-relevant content and knows when to search through a knowledge base, when to ask a user for clarification, or when to transfer you to a human. What if you want to take Watson Assistant a step further and understand more about the users on the other end and how they’re feeling? Tone Analyzer can do just that! It can understand tones and emotion to predict the state and mood of the user.
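To make that extraction capability concrete, here is a minimal sketch in Python of working with a Natural Language Understanding `analyze` response. In a real app you would get this dict back from the `ibm-watson` SDK; the `top_mentions` helper, the relevance cutoff, and the trimmed sample response are my own illustration of the documented JSON shape, not part of any Watson code pattern.

```python
def top_mentions(response, min_relevance=0.5):
    """Pull the most relevant entity and keyword texts out of an
    NLU-style analyze response (a plain dict)."""
    entities = [e["text"] for e in response.get("entities", [])
                if e.get("relevance", 0) >= min_relevance]
    keywords = [k["text"] for k in response.get("keywords", [])
                if k.get("relevance", 0) >= min_relevance]
    return entities, keywords

if __name__ == "__main__":
    # Trimmed example of the JSON shape the service returns.
    sample = {
        "entities": [{"type": "Location", "text": "Dominica", "relevance": 0.9}],
        "keywords": [{"text": "hurricane damage", "relevance": 0.8}],
    }
    print(top_mentions(sample))  # (['Dominica'], ['hurricane damage'])
```

Filtering on the relevance score like this is a common way to keep only the mentions your app should act on, rather than every phrase the service spots.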
All of these services can be molded into a solution that fits into our Call for Code competition. Another service I’d like to highlight is Watson Visual Recognition. This Watson service was highlighted in our Call for Code 2018 Global Competition by PD3R. They built a custom visual recognition model, using Watson Visual Recognition to train their classifier in Watson Studio. They also used IBM Watson APIs to assess structural damage to buildings and determine whether houses could be retrofitted rather than rebuilt completely.
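To give a flavor of how classifier output turns into a decision like PD3R's, here is a small Python sketch. A Visual Recognition classifier scores each image against your trained classes as a list of `{"class": ..., "score": ...}` entries; the class names (`severe_damage`, `moderate_damage`), the confidence threshold, and the decision rule below are all hypothetical, not taken from PD3R's actual model.

```python
def assess_structure(classes, confidence=0.6):
    """Turn classifier scores for one image into a retrofit-vs-rebuild call.
    Class names and threshold are illustrative only."""
    best = max(classes, key=lambda c: c["score"])
    if best["score"] < confidence:
        return "inspect manually"  # model isn't sure enough to decide
    return "rebuild" if best["class"] == "severe_damage" else "retrofit"

if __name__ == "__main__":
    scores = [{"class": "severe_damage", "score": 0.82},
              {"class": "moderate_damage", "score": 0.11}]
    print(assess_structure(scores))  # severe damage wins -> "rebuild"
```

Note the low-confidence fallback: for safety-critical calls like structural damage, deferring to a human inspector when the model is unsure is usually the right design choice.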
These are just a few examples of what Watson can do. Watson has even more capabilities in the data science and machine learning areas, a topic that we will discuss more in-depth in the coming weeks.
Getting started with AI for Call for Code
If you don’t already have an IBM Cloud account, the first step is signing up, which takes less than two minutes. Just ensure that you use a valid email address because you must confirm your email address before you can create any services.
If you really want to “wow” everyone in the Call for Code 2019 Global Challenge, pay attention to the two fantastic resources I’m going to give you that show the power and ease of use of Watson AI. I’ll even give you a sample idea that you can expand on.
Frequently in the aftermath of a natural disaster, people can feel hopeless, have a lot of questions, and generally just need help. Wouldn’t it be great if you could set up a mobile app that integrates a virtual assistant that helps people by giving them access to the resources they need? Luckily, such individual solutions already exist; they just need some slight tweaking to bring a larger idea to life.
We talked last week about the vast selection of IBM code patterns and how those complete solutions can become a great starting point for your Call for Code submission. Check out the dedicated section of AI code patterns. Also take a look at this code pattern written by IBMer Steve Martinelli that creates an iOS app with Watson Assistant integrated right into it. This would be a great base to start with because the foundation is all there for you.
You can combine that code pattern with another one written by IBMer Muralidhar Chavan that utilizes an “interface bot” to handle user interactions, interpret the input, and send each query to the specialized AI with domain knowledge about that query. The interface bot essentially acts as a broker that reaches out to the right AI whenever a user interacts with it.
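The broker idea above can be sketched in a few lines of Python. This is a deliberately simplified stand-in for the code pattern: the specialist bots, their canned replies, and the keyword-matching routing are all illustrative (the real pattern uses Watson Assistant to classify intent rather than keyword lookups).

```python
# Minimal sketch of the "interface bot" broker pattern: one front-line
# bot interprets the user's query, then forwards it to a specialized
# bot with the right domain knowledge.

def weather_bot(query):
    return "Weather update: conditions are being monitored."

def shelter_bot(query):
    return "Nearest shelters: check your local emergency services listing."

# Map trigger keywords to specialist bots (hypothetical examples).
SPECIALISTS = {
    "weather": weather_bot,
    "storm": weather_bot,
    "shelter": shelter_bot,
    "evacuate": shelter_bot,
}

def interface_bot(query):
    """Route the query to the first specialist whose keyword matches;
    otherwise ask the user for clarification."""
    lowered = query.lower()
    for keyword, specialist in SPECIALISTS.items():
        if keyword in lowered:
            return specialist(query)
    return "Could you tell me more about what you need help with?"

if __name__ == "__main__":
    print(interface_bot("Is there a storm coming tonight?"))
    print(interface_bot("Where is the nearest shelter?"))
```

The value of the broker is that each specialist bot stays small and focused on one domain, while the interface bot owns the conversation and decides who answers.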
Combining these two uses of Watson AI could really help people get the help they need after a natural disaster, when humans might not be available to respond instantly. Building upon this idea could make you the winner of the Call for Code 2019 Global Challenge. You never know!
This week we learned about AI and how versatile and powerful IBM Watson services are, how Call for Code 2018 Global Challenge runner-up PD3R used Watson Visual Recognition to help people rebuild after a natural disaster, and how two excellent code patterns can really jump-start your Call for Code 2019 submission.
I’ll be back soon with Part 3 where I’ll talk about traffic and weather resources and how they can help fortify your Call for Code 2019 submission.
In the meantime, follow my work on GitHub.