The benefits of Google Glass to your Supply Chain
Steve Brady's first column: As we enter the age of 'wearables,' Google Glass can provide your supply chain with a new way to solve old problems.
Google Glass has been out “in the wild” for a little over a year, and with the number of units still only in the tens of thousands, it has captured the imaginations, and stoked the fears, of many. In fact, a recent study found that nearly half of Americans recognize Google Glass when they see it. Some hail Glass as ushering in the era of “wearables” and changing the face of mobile computing, while others lament the invasion of privacy.
What Google Glass can inspire is a new way of approaching old problems. This is often the case with new technologies. We first see the technology as a way of doing the old thing better, or faster, or cheaper. It is this adaptation that provides the fuel for the engine of innovation, allowing the technology to gain a foothold and springboard into newer and truly life-changing ideas. Over time, we see the real shift from doing old things better to truly doing “new things.” With cell phones we still made our calls, but where and when we wanted. Then we started ‘texting,’ and then, with the advent of the smartphone, we saw truly new ways of connecting emerge.
What does Google Glass (or really, any wearable) have to offer the supply chain professional? Certainly there are no “out of the box” solutions ready for implementation today, but the opportunities are there for the taking with just a little imagination and some coding.
When I became a Glass Explorer I was less interested in the camera than in the opportunity to deliver information directly to my eyeballs. The ability to receive information hands free, simply by talking, and have it pop up on my display seemed to be the transformational product I was seeking. After nine months, I have come to appreciate the potential when all the elements of Google Glass are combined.
In just the past month I have shared Glass with several hundred people who are actively engaged in a variety of professions, from logistics and supply chain operations to public utilities. It is always a pleasure to share Glass, introduce people to the Glass interface, and watch their faces as they ask Google a question and see and hear the answer. But lately, people are not only excited to see Glass but seem to be more excited by the potential. I have found myself shifting from “show and tell” to show--and LISTEN.
For instance, when people who work in a warehouse put Glass on for the first time, they ask questions about how it can “read” data. Can it read a barcode? (Yes.) Can it connect to a database? (Yes.) Can it display what the database tells it? (Yes.) From these simple questions comes a series of good ideas. Just eliminating the trip to a computer to look up information in a database is seen as a great step forward. But then more ideas flow.
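To make that scan-lookup-display flow concrete, here is a minimal sketch in Python. It is only an illustration: the dictionary stands in for the warehouse database, and the helper names are hypothetical -- a real Glassware app would call a barcode library and Google's Glass APIs instead.

```python
# Hypothetical sketch: scanned barcode -> database lookup -> display text.
# Names and data are illustrative stand-ins, not any real warehouse system.

INVENTORY = {  # stand-in for a warehouse database
    "012345678905": {"name": "Widget, blue", "bin": "A-14", "qty": 120},
}

def lookup_item(barcode):
    """Query the (stand-in) database for the scanned item."""
    return INVENTORY.get(barcode)

def display_card(item):
    """Format a timeline-card-style text block for the Glass display."""
    if item is None:
        return "Item not found"
    return f"{item['name']}\nBin {item['bin']} | Qty {item['qty']}"

print(display_card(lookup_item("012345678905")))
```

The worker never walks to a terminal: the scan triggers the lookup, and the answer appears in their line of sight.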
One person suggested that a “pick order” could be issued that not only tells the picker what to get, and where, but shows a picture of the item. This provides a quick visual check when picking. After the item is picked, the picker can read the barcode on the item to verify the pick and either be given the next item or be told where to deliver the item. This has significant potential for improving pick rates and accuracy, especially in “chaotic storage” systems similar to the one used by Amazon.
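That pick-and-verify loop can be sketched in a few lines. This is a minimal sketch under assumed conventions -- the pick-order format and SKU-based barcodes below are hypothetical, not taken from any real warehouse management system.

```python
# Hypothetical pick-and-verify loop: each scan either confirms the pick
# and advances the order, or flags a wrong item.

PICK_ORDER = [  # illustrative pick order, with an image for visual checks
    {"sku": "111", "bin": "A-01", "image": "widget.jpg"},
    {"sku": "222", "bin": "C-07", "image": "gadget.jpg"},
]

def verify_pick(expected_sku, scanned_sku):
    """Compare the scanned barcode against the expected item."""
    return expected_sku == scanned_sku

def next_instruction(order, index, scanned_sku):
    """After a scan, confirm the pick and say what to do next."""
    if not verify_pick(order[index]["sku"], scanned_sku):
        return "Wrong item - return it and re-scan"
    if index + 1 < len(order):
        nxt = order[index + 1]
        return f"Pick SKU {nxt['sku']} from bin {nxt['bin']}"
    return "Order complete - deliver to packing"
```

Because the verification happens at the shelf, a mis-pick is caught immediately instead of at the packing station.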
Another idea I heard is to provide directions on picking routes. While technology may still not allow us to provide true navigation inside warehouses, the ability to display a floor plan with a simple “you are here” on the screen can help guide workers through complex warehouse or facility floor plans. This can be accomplished by knowing where someone starts and then knowing where they are when they arrive to pick an item. What if the system doesn’t “know” where you are? Simply read the barcode on a shelf, and the system can re-orient itself based on its knowledge of the facility. Of course, if you tie this knowledge with the built-in compass, accelerometer, gyroscope, and other sensors in Glass, you can begin to see the initial stages of internal navigation for optimized routing inside warehouses.
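The re-orientation trick -- scan a shelf, and the system re-anchors your position on its facility map -- could look something like this sketch. The shelf barcodes, coordinates, and the crude compass-direction hint are all invented for illustration; a real system would fuse this with the sensor data mentioned above.

```python
# Hypothetical sketch: a scanned shelf barcode re-anchors the worker's
# position, and a crude "you are here" hint points toward the target bin.

SHELF_LOCATIONS = {  # stand-in facility map: shelf barcode -> (x, y) metres
    "SHELF-A01": (0, 0),
    "SHELF-C07": (30, 12),
}

def reorient(shelf_barcode):
    """Recover the worker's position from a scanned shelf barcode."""
    return SHELF_LOCATIONS.get(shelf_barcode)

def route_hint(position, target):
    """Very crude guidance from the current position toward the target."""
    dx, dy = target[0] - position[0], target[1] - position[1]
    ew = "east" if dx > 0 else "west"
    ns = "north" if dy > 0 else "south"
    return f"Head {abs(dx)} m {ew}, then {abs(dy)} m {ns}"
```

Every routine scan a picker already performs doubles as a position fix, which is what makes indoor routing plausible without GPS.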
In the previous examples Glass ties into an existing system to leverage the data to improve operations. That model could be flipped, and Glass could instead be used to build data sets. Several people who tried Glass are involved with inspections of various kinds, from vehicle and home/facility inspections to public utility infrastructure. In each of these cases they need to document what they are seeing, either to prove exceptions or violations, or to document that the work is completed. Glass can help here, as well.
Glass could be used to capture a photograph and then dictate notes about what is seen, delivering the information to a database that can then be used to flesh out reports. These people liked the ability to just say “take a picture” and then “add comments,” perhaps followed by “add to report,” but they felt the most compelling benefit was doing all of this with both hands free. They can hold, lift, or push aside anything that would hinder their ability to get in with a traditional camera, and since the camera is located right above their eye, if they can see it, they can “snap it.”
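The capture-and-comment workflow could feed a reporting database through a record as simple as this sketch. The photo ID and field names are hypothetical; the camera capture and speech-to-text steps are assumed to happen upstream on the device.

```python
# Hypothetical sketch: bundle a photo reference and a dictated comment
# into one timestamped inspection record for later report-building.
import datetime

def inspection_entry(photo_id, comment):
    """Combine a photo reference and a dictated note into one record."""
    return {
        "photo": photo_id,
        "comment": comment,
        "timestamp": datetime.datetime.now().isoformat(timespec="seconds"),
    }

report = []  # stand-in for the reporting database
report.append(inspection_entry("IMG_0042", "Cracked insulator on pole 17"))
```

By the time the inspector is back at a desk, the report is already half written.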
This leads to what many say was the most compelling benefit of Google Glass--the ability to be used completely “hands free.” Whether it is simply looking up to see information updates, getting navigation by asking for directions, or documenting inspections while their hands are otherwise busy, Google Glass is seen as liberating.
Google Glass is still an infant technology, and there is much work still to be done to realize the potential of this and other wearable technologies. The ideas here merely transfer existing activities to a new technology. But even these ideas could enhance productivity and improve quality. Imagine what the next wave of wearables can do when we take the next step and truly find new and unexpected uses. We are only limited by our imagination, and if the past 30 years of technology have shown anything, it is that humanity is never at a loss for imagination.