(Transcript from RD Perspectives video with Jen Stirrup interviewing Dan Hermes about AI.)
We’re hearing a lot about AI. Will it bring about a new kind of business?
First it will help us to do our existing business better.
And how is AI going to do that?
We’re parsing the business decisions within each business process and determining which ones are best modeled by machine learning. Then we use machine learning to make those decisions faster and better.
The first step in any new technology is helping us do things we already do, but better. For any decision a business makes, AI may be able to help make it more efficiently and accurately. Here are some types of those decisions:
– Supply Chain
– Logistics
– Human Resourcing
– Expert Assistance (diagnoses)
– Customer Relationship Management
– Customer profiling and preferences
– Forecasting
– Optimization
Could you give a real world example?
Take insurance claims, for instance, because those model well. A customer submits a claim. Then the customer must wait for the adjuster to get through their stack of claims and reach theirs. When the adjuster reviews the claim, they follow policies, but the ultimate decision lies with the adjuster. Give the claim to a well-trained AI that has internalized the company policies and actuarial tables, and the claim can be processed in a minimal period of time. If the AI requires human intervention, it asks for it, but otherwise, machine learning can process many insurance claims on its own.
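As a rough sketch of what that might look like under the hood, a claims decision like this is often framed as a classification problem. The feature names and data below are hypothetical, not any insurer's actual system:

```python
# A minimal, hypothetical sketch of modeling claim approval as classification.
# Feature names and data are invented for illustration; a real insurer would
# train on historical claims labeled by adjusters.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Columns (hypothetical): claim_amount, policy_age_years, prior_claims, severity_score
X_train = np.array([
    [1200,   3, 0, 2],
    [9800,   1, 2, 8],
    [450,   10, 0, 1],
    [15000,  2, 3, 9],
])
y_train = np.array([1, 0, 1, 0])  # 1 = approve automatically, 0 = route to a human adjuster

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

new_claim = np.array([[2300, 5, 1, 3]])
proba = model.predict_proba(new_claim)[0][1]
# Low-confidence predictions are the "asks for human intervention" path.
print("auto-approve" if proba > 0.9 else "send to adjuster", round(proba, 2))
```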
How about something more tangible, like products?
Inventory is an area where AI can improve our bottom line. By modeling supply chains, buying seasons, warehousing patterns, and customer habits, AI can make educated buy recommendations to our corporate buyers. We can reduce unused merchandise in warehouses and avoid product shortages when demand can be accurately modeled. In short, we can create supply chains that are more attuned to the demands of the market than before.
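A hedged sketch of that kind of buy recommendation, framed as a regression problem; the features and numbers are invented purely for illustration:

```python
# A hypothetical sketch of demand forecasting as regression.
# Features and values are made up; real buyers would use sales history,
# seasonality, and supply-chain data.
import numpy as np
from sklearn.linear_model import LinearRegression

# Columns (hypothetical): month_of_year, units_sold_last_month, promo_running (0/1)
X = np.array([
    [1, 500, 0],
    [2, 520, 0],
    [3, 610, 1],
    [4, 580, 0],
    [5, 700, 1],
])
y = np.array([520, 610, 580, 700, 760])  # units sold the following month

model = LinearRegression().fit(X, y)
forecast = model.predict(np.array([[6, 760, 0]]))[0]
print(f"Suggested buy for next month: about {forecast:.0f} units")
```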
It sounds like AI can help us solve old problems more effectively.
Exactly.
What industries are using AI?
Which ones aren’t? Here are a few that I know of:
– Finance
– Healthcare
– Biotech
– Media and entertainment
– Transportation
– E-commerce
– Manufacturing
How does an AI “learn”?
Machine learning builds models using these three approaches (a short sketch of the first two follows the list):
– Supervised learning: the algorithm learns from example data and target responses. This data might include numeric values or string labels such as classes or tags. Later, when posed with new examples, the model can predict the correct response.
– Unsupervised learning: the algorithm learns from examples without any associated responses, so it determines the patterns in the data on its own.
– Reinforcement learning: the algorithm is trained to make specific decisions based on feedback from its environment. In this way, the machine captures the best possible knowledge for making accurate decisions.
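To make the first two approaches concrete, here is a minimal scikit-learn sketch using toy data (purely illustrative, not a production workflow):

```python
# Toy illustration of supervised vs. unsupervised learning with scikit-learn.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.cluster import KMeans

X = np.array([[1, 1], [1, 2], [8, 8], [9, 8]])

# Supervised: each example comes with a target response (a class label).
labels = np.array(["small", "small", "large", "large"])
clf = KNeighborsClassifier(n_neighbors=1).fit(X, labels)
print(clf.predict([[2, 1]]))   # -> ['small']

# Unsupervised: no responses; the algorithm finds the pattern (two clusters) itself.
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
print(km.labels_)              # e.g. [0 0 1 1]
```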
Then how does AI use that learning?
AI applies those models to the business decisions at hand: buy vs. sell, schedule now vs. defer, deplete stores vs. restock, approve or deny, etc. If the AI can match the scenario to an appropriate model, it employs that model to make a recommendation or a decision.
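A hypothetical sketch of that matching step; the "models" here are stand-in functions rather than trained ML models:

```python
# Hypothetical sketch: match a business scenario to a model and turn its
# output into a recommendation. The models are placeholders for illustration.
def restock_model(features):
    # Pretend model: recommend restocking when stock covers < 2 weeks of demand.
    return "restock" if features["weeks_of_stock"] < 2 else "deplete stores"

def approval_model(features):
    return "approve" if features["risk_score"] < 0.3 else "deny"

MODELS = {"inventory": restock_model, "claims": approval_model}

def recommend(scenario, features):
    model = MODELS.get(scenario)
    if model is None:
        return "no model for this scenario; escalate to a human"
    return model(features)

print(recommend("inventory", {"weeks_of_stock": 1.5}))  # -> restock
print(recommend("claims", {"risk_score": 0.1}))         # -> approve
```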
What are some of the big companies doing?
Google Maps
– Locates fastest routes based upon past traffic patterns and current conditions
– Predicts when, where, and how difficult it will be to find an empty parking spot, based upon the dispersion of spots
Netflix
– Personalized recommendations of movies and shows.
– Generates series and movie banners, designs, and artwork
– Analyzes each movie's title images and tests them among viewer communities
– Picks the best images based upon this reinforcement learning.
eBay
– A product recommender chatbot, called ShopBot, that helps eBay understand what users are looking for.
– Processes their text messages and images and finds the best match.
– Excellent contextual understanding and friendly language.
– Users on competitors' sites tend to ask more questions about products than users on eBay do.
What is a chatbot and what does it have to do with AI?
A chatbot is a natural language interface to an AI. Chatbots give us a text box in which to type our questions in natural language, and the AI will attempt to understand our meaning and respond to it intelligently and productively. The key word here is "attempt".
Because it doesn’t always happen. We type in a tough question only to be met with a page of boilerplate support tips and links. That’s where AI comes in. Interpretation of our natural language is the beginning of giving a useful response. “What is the best case for my iPhone?”, for example, might prompt a series of questions from the chatbot to determine what kind of iPhone you have and what your taste in cases might be. All this could result in a response: “Invisible Armor by Scratchless and the Phone Sock by Hangups look like good options. Would you like to see some?” That could be a good chatbot.
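As a toy illustration of that interpret-then-clarify flow, here is a sketch that uses keyword matching instead of a real language model; the product names come from the example dialogue above:

```python
# A toy sketch of the chatbot flow described above: interpret the question,
# ask a clarifying question, then respond. Real chatbots use trained language
# models rather than keyword matching.
def chatbot(question, phone_model=None, style=None):
    text = question.lower()
    if "case" in text and "iphone" in text:
        if phone_model is None:
            return "Which iPhone model do you have?"            # clarifying question
        if style is None:
            return "Do you prefer rugged protection or a slim look?"
        if "rugged" in style.lower():
            return f"For your {phone_model}, Invisible Armor by Scratchless looks like a good option. Would you like to see it?"
        return f"For your {phone_model}, the Phone Sock by Hangups looks like a good option. Would you like to see it?"
    return "Sorry, I didn't catch that. Could you rephrase?"

print(chatbot("What is the best case for my iPhone?"))
print(chatbot("What is the best case for my iPhone?", "iPhone 12", "slim"))
```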
Will AIs replace employees?
They already are, but not yet at a fast pace. Certain tasks (like processing insurance claims) lend themselves well to machine learning while many other tasks are still too complex to be handled by a machine alone. This is leading us into the age of AI-aided decisions, or expert assistance.
Haven’t computers aided decisions for a long time?
Of course, computers, reports, data, and knowledge-based systems have assisted in business decisions for years. Eventually, however, most major decisions will be accompanied by a "suggestion program," born of an AI that has models of many situations relevant to the decision at hand. Doctors have AIs recommending and aiding diagnoses. Stockbrokers are guided by AIs modeling the market. In these cases of AI-aided decisions, the AI reaches its own conclusions, but the business decision is typically left to the human being, which is why it's called expert assistance and not fully automated.
I’m ok with my doctor making the final call.
Lol, yes, but not everybody is. Index funds, for example, have emerged as funds that are entirely automated models of certain segments of the market. This is an example of AIs running the show. A NASDAQ index fund, for example, contains a representative investment in NASDAQ stocks and is adjusted proportionally each time the NASDAQ changes in volume or value. This is a broker- and manager-free experience where you're trusting an automated manager AI to handle your investments for you according to a specified set of requirements.
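A simplified sketch of that proportional, manager-free adjustment; the weights below are illustrative, not an actual fund's holdings:

```python
# Simplified sketch of proportional index-fund rebalancing, as described above.
# Weights are illustrative; a real fund tracks the actual index composition.
def rebalance(portfolio_value, index_weights):
    """Allocate the portfolio across holdings in proportion to index weights."""
    total = sum(index_weights.values())
    return {ticker: portfolio_value * w / total for ticker, w in index_weights.items()}

index_weights = {"AAPL": 0.12, "MSFT": 0.11, "NVDA": 0.09, "OTHER": 0.68}
print(rebalance(100_000, index_weights))
# When the index composition changes, re-run rebalance() with the new weights.
```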
We hear a lot about data, numbers, and text with AI. What about physical things?
Companies like Boston Dynamics are making tremendous strides in modeling the movement of living creatures. Humans such as soldiers, gymnasts, acrobats, and others with extraordinary physical abilities are being modeled, and those models are embedded in robots containing an AI that can employ them to move and do tasks. Robots in the form of animals such as dogs and wolves are also in testing and production. Much of this appears to be Department of Defense (DoD) funded. They're creating an army of robot AI warfighters. The ethical considerations in projects like these are substantial. Do we give these robots a gun?
Similar physical AIs can be found at amusement parks such as Disney. There are Marvel-inspired trapeze artist robots working without wires (but with nets) that catapult into the sky, do flips, then strike superhero poses before plunging down into the net to bounce and do it again. These AIs model human circus artists, sporting bodies that can fly through the air with the greatest of ease, better than most of us could. But these are specialized machines. You wouldn't want one of these to try to make you an omelet.
And of course there are AIs in many types of robots: building cars, baking cookies, mixing chemical solutions, surveying land, and working in Amazon warehouses. Instead of following a set of specific commands, these more sophisticated machines employ models to help determine their behavior, which is a lot more like the way humans do tasks.
Should we be concerned about a Terminator-like scenario, where the machines take over?
We shouldn't rule it out, experts tell us, so caution as we go is the best course. It's apparently not likely that a machine will become self-aware and plot an AI coup. It's far more likely that an AI in, say, a paperclip factory will become a bit too creative in its quest to manufacture paperclips (Bostrom's paperclip maximizer). It may, for example, decide to tap into local bank systems to acquire funds to help it make paperclips. It might remove humans from a warehouse it has repurposed for paperclip-making without notifying them, and without giving them warning to defend themselves as it removes them to the recycling facility. Such an unhinged manufacturing AI might decide to clone itself into ten major cities and set up new offices, acquiring office and warehouse space with funds it "borrowed" from banks, hiring real employees, and setting up other AIs to oversee business processes and paperclip production. A nightmare scenario, no doubt, but without the proper checks on AI behavior, giving AIs substantial abilities is bound to lead to some interesting scenarios, ones we need to try to anticipate and avoid.
Those are some risks worth noting. Now what sorts of benefits can we expect from modeling?
There are two types of supervised machine learning tasks: predicting values, which is called regression, and predicting classes of items, which is called classification. Regression is the one that will give us an estimate of units to buy that quarter, and classification will look at a conveyor belt full of products and tell us what's what.
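A compact, toy illustration of the two task types; the earlier claims and inventory sketches were instances of classification and regression, respectively:

```python
# Regression predicts a number; classification predicts a class. Toy data only.
import numpy as np
from sklearn.linear_model import LinearRegression, LogisticRegression

X = np.array([[1], [2], [3], [4]])

# Regression: "how many units should we buy?" -> a value
reg = LinearRegression().fit(X, np.array([10.0, 20.0, 30.0, 40.0]))
print(reg.predict([[5]]))          # ~[50.]

# Classification: "what is this product?" -> a class label
clf = LogisticRegression().fit(X, np.array(["widget", "widget", "gadget", "gadget"]))
print(clf.predict([[5]]))          # ['gadget']
```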
In training models, what do we need to look out for?
Training a machine learning model is as much an art as it is a science. You want enough data but not too much. You want to be certain that the training data applies to the specific types of scenarios you're interested in. Data can be copious but flawed, missing key elements, or containing elements irrelevant to the task at hand. If you train your model too much, it becomes too specialized to particular situations; this is called overfitting. If you don't train it enough, the model may not be sufficiently accurate; that is underfitting.
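A common way to spot both problems is to compare a model's accuracy on its training data with its accuracy on data it has never seen. A minimal sketch with synthetic data:

```python
# Spotting over- and underfitting by comparing training accuracy with accuracy
# on a held-out test set. Synthetic data for illustration only.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=500, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# An unconstrained tree tends to overfit: near-perfect on training data,
# noticeably worse on the held-out test set.
deep = DecisionTreeClassifier(random_state=0).fit(X_train, y_train)
print("deep tree train/test:", deep.score(X_train, y_train), deep.score(X_test, y_test))

# An overly constrained tree may underfit: mediocre on both.
shallow = DecisionTreeClassifier(max_depth=1, random_state=0).fit(X_train, y_train)
print("stump     train/test:", shallow.score(X_train, y_train), shallow.score(X_test, y_test))
```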
What technologies are data scientists using to build models?
Many data scientists forgo a fancy framework in favor of a sleek IDE and a simple language. The most popular options for getting started are TensorFlow and scikit-learn using Python. Enterprise environments might require industrial-strength options such as IBM Watson or Azure Machine Learning.
What is a neural network and how is that different from regular machine learning?
Neural networks are for machine learning, too, but they structure the model in a form that resembles neurons in the human brain: connections, axons, synapses, impulses between neurons, those sorts of things. These constructs are called artificial neural networks (ANNs). This is supposed to bring the end result a bit closer to how the human brain may work, emulating our own organic logic units. Data scientists often begin building ANNs with Keras, an API for building and training neural networks. ANNs are at the core of deep learning.
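A minimal sketch of what building a small ANN with Keras looks like; the layer sizes, data, and training settings are arbitrary, chosen only to show the shape of the API:

```python
# A minimal artificial neural network in Keras. Data and settings are arbitrary.
import numpy as np
from tensorflow import keras

# Toy data: 100 examples, 4 features, binary target.
X = np.random.rand(100, 4)
y = (X.sum(axis=1) > 2).astype(int)

model = keras.Sequential([
    keras.Input(shape=(4,)),
    keras.layers.Dense(8, activation="relu"),     # hidden layer of "neurons"
    keras.layers.Dense(1, activation="sigmoid"),  # output layer
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(X, y, epochs=5, verbose=0)
print(model.predict(X[:3], verbose=0))  # probabilities for the first three examples
```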
What is Deep Learning?
ANNs bring about a multi-layered approach to modeling, leading us to think of the models as deep, and we describe the training process of these deep networks as deep learning. Neural networks with many layers are called deep neural networks (DNNs), so DNNs and deep learning have to do with the scale of the ANNs. Larger, multi-layered ANNs are DNNs, and deep learning is how we train them.
Deep learning powers such feats as the classification of billions of images in Google Images, Apple Siri’s speech recognition, and the world champion Go player, DeepMind’s AlphaGo.
Where do I get started with AI in my business? What attitude and approach is necessary to drive adoption in AI?
Some careful thought, creative experimentation, and diligent work could make your business even smarter.
People need an AI attitude: creativity, diligence, and resilience to work smart.
For example, look at your business processes. Identify the ones where decisions are challenging. Ask yourself why they are challenging. If the answer is that the historical data is too complex to derive knowledge from, then the process could be a good candidate for machine learning. Well-documented decision-making processes can also sometimes be good candidates for machine learning using reinforcement learning.
You'll need to identify which of these processes lend themselves best to modeling. Here you'll need an educated review of your business processes and possibly some test modeling to determine whether improvements can be made using AI. Not all problems can be solved this way, and finding ones that can requires initiative, commitment, and expertise.
In a nutshell, make a list of the top processes you'd like to consider improving. Then engage some data scientists or AI professionals to help you review your options.
Check back for the link to the RD Perspectives video here.