We tend to think of innovation as being about ideas. A lone genius working in a secret lab somewhere screams “Eureka!” and the world is instantly changed. But that’s not how the real world works. In truth, innovation is about solving problems, and it starts with identifying useful problems to solve.
It is with that in mind that IBM comes out with its annual list of five technologies that it expects to impact the world in five years. Clearly, each year’s list is somewhat speculative, but it also gives us a look at the problems that the company considers to be important and that its scientists are actively working on solving.
This year’s list focuses on two aspects of digital technology that are particularly important for businesses today. The first is how we can use digital technology to have a greater impact on the physical world in which we all live and work. The second, which is becoming increasingly crucial, is how we can make those technologies more secure.
1. AI-Powered Microscopes At Nanoscale
In the late 17th century, a middle-aged draper named Antonie van Leeuwenhoek became interested in the magnifying glasses he used to inspect fabric. From those humble beginnings arose the new age of microscopy, which has helped produce countless major discoveries over the last 350 years.
Today, IBM hopes to spur a similar revolution with nanoscale microscopes powered by AI. Unlike Leeuwenhoek’s version, these will not use optical lenses but optical chips like the ones in your cell phone, shrunk down small enough to observe microscopic cells in their natural environment, and they will use AI to analyze and interpret what they see.
As Simone Bianco and Tom Zimmerman, both researchers at the company, explained in their popular TED Talk, these devices can help us better understand how plankton in the world’s oceans respond to stimuli, which could help mitigate the effects of global warming.
IBM is also partnering with the National Science Foundation (NSF) to transform our bodies’ cells into microscopic sensors. With a greater understanding of what’s going on in both normal and abnormal conditions, scientists will be able to better diagnose disease and come up with new cures. It may even help power science for the next 350 years.
2. Combining Crypto-Anchors And Blockchain To Secure The World’s Supply Chains
In 1999, a young assistant brand manager at Procter and Gamble named Kevin Ashton realized that an obscure technology that used radio waves to power small, passive devices could revolutionize supply chain management. Today, RFID chips are everywhere, helping us to track and manage inventory across the globe.
However, although RFID helps to increase efficiency, it can do little about security, which has become a massive problem for two reasons. First, counterfeiting costs businesses hundreds of billions of dollars a year and helps finance criminal gangs and terrorists. Second, in the case of things like food and medicine, insecure supply chains are a major health hazard.
IBM sees a solution to the problem of counterfeit goods in combining tamper-proof digital fingerprints it calls “crypto-anchors” with blockchain technology to secure supply chains at a cost low enough to spur wide adoption. It is also unveiling the world’s smallest computer this week. Costing less than 10 cents to make and smaller than a grain of salt, it can be used to analyze products such as wine and medicine and verify their provenance.
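To make the idea concrete, here is a toy Python sketch of how a product’s physical fingerprint might be paired with a tamper-evident, hash-linked ledger. The fingerprint strings, product IDs and ledger structure are hypothetical illustrations, not IBM’s actual crypto-anchor or blockchain design.

```python
# Toy sketch: record a hash of each product's "fingerprint" in a hash-linked
# ledger, then check a product against the ledger to verify provenance.
# Everything here is an illustrative stand-in, not IBM's design.
import hashlib
import json
import time

def sha256(data: str) -> str:
    return hashlib.sha256(data.encode()).hexdigest()

ledger = []  # each entry links to the hash of the previous entry

def record(product_id: str, fingerprint: str) -> None:
    prev_hash = ledger[-1]["entry_hash"] if ledger else "0" * 64
    entry = {
        "product_id": product_id,
        "fingerprint_hash": sha256(fingerprint),  # store only the hash
        "timestamp": time.time(),
        "prev_hash": prev_hash,
    }
    entry["entry_hash"] = sha256(json.dumps(entry, sort_keys=True))
    ledger.append(entry)

def verify(product_id: str, fingerprint: str) -> bool:
    # A product checks out only if its fingerprint matches a ledger entry.
    return any(e["product_id"] == product_id and
               e["fingerprint_hash"] == sha256(fingerprint) for e in ledger)

record("wine-lot-42", "optical-signature-abc123")         # made-up fingerprint
print(verify("wine-lot-42", "optical-signature-abc123"))  # True: provenance holds
print(verify("wine-lot-42", "counterfeit-signature"))     # False: mismatch
```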
As a first step toward securing supply chains, the company has formed a joint venture with the global logistics firm Maersk to apply blockchain technology to shipping around the world. For businesses, this will mean supply chains that are more efficient, reliable and secure.
3. Super-Secure Lattice Cryptography
2017 was a great year for cyber attackers, but not so good for the rest of us. Major breaches at Equifax, at Uber and in a database containing over 200 million voter records were just the highlights of a banner year for hackers. These attacks expose a critical vulnerability in both our financial systems and the integrity of our democracy.
Part of the problem is that conventional cryptography is designed to make breaking a code practically impossible, even for supercomputers, yet data still has to be decrypted before it can be analyzed or used. So when hackers get into a system, they can often take whatever they want.
IBM is working on a form of security called lattice-based cryptography. Unlike traditional methods, which rely on keys built from impossibly large prime numbers, these schemes hide information inside hard algebraic problems on structures called “lattices,” which are believed to resist even the quantum computers expected many years from now. A related technology, Fully Homomorphic Encryption (FHE), will allow systems to analyze data without ever decrypting it.
So, for example, a hacker breaking into a customer or voter database will be free to calculate how much tax is owed on a purchase or how many Millennials reliably vote for Democrats, but identities will remain secret. Businesses may also be able to analyze data they never could before, because they will no longer need decrypted access to it.
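As a rough illustration of computing on encrypted data, here is a toy Python sketch of a Paillier-style additively homomorphic scheme. It is not the lattice-based FHE that IBM describes, and the tiny hardcoded primes make it completely insecure; it only shows that two ciphertexts can be combined so that the decrypted result equals the sum of the hidden values.

```python
# Toy additively homomorphic encryption (Paillier-style), shown only to
# illustrate the idea of computing on encrypted data. NOT lattice-based FHE,
# and NOT secure: the primes below are tiny. Requires Python 3.8+ for pow(x, -1, n).
from math import gcd

p, q = 293, 433                    # toy primes; real keys use ~2048-bit primes
n = p * q
n_sq = n * n
g = n + 1                          # standard Paillier generator choice
lam = (p - 1) * (q - 1) // gcd(p - 1, q - 1)   # lcm(p-1, q-1)

def L(x: int) -> int:
    return (x - 1) // n

mu = pow(L(pow(g, lam, n_sq)), -1, n)   # modular inverse used in decryption

def encrypt(m: int, r: int) -> int:
    # c = g^m * r^n mod n^2, with r a random value coprime to n
    return (pow(g, m, n_sq) * pow(r, n, n_sq)) % n_sq

def decrypt(c: int) -> int:
    return (L(pow(c, lam, n_sq)) * mu) % n

# Encrypt two values, "add" them while still encrypted, then decrypt the sum.
c1 = encrypt(42, 17)
c2 = encrypt(58, 23)
c_sum = (c1 * c2) % n_sq           # multiplying ciphertexts adds plaintexts
print(decrypt(c_sum))              # 100, computed without ever decrypting c1 or c2
```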
4. Rooting Out Data Bias For Reliable And Trustworthy Artificial Intelligence
As Cathy O’Neil explains in Weapons of Math Destruction, data bias has become a massive problem. One famous example of this kind of bias is Microsoft’s Tay, an AI-powered agent that was let loose on Twitter. Exposed to Internet trolls, it was transformed from a friendly and casual bot (“humans are super cool”) into something downright scary (“Hitler was right and I hate Jews”).
Even more serious are the real-world impacts of data bias. Today’s algorithms often determine what college we attend, whether we get hired for a job and even who goes to prison and for how long. However, these systems are often “black boxes” whose judgments are rarely questioned. Their verdicts simply appear on a computer screen, and fates are determined.
IBM is now working with MIT to embed human values and principles in automated decision-making and to devise methods to test, audit and prevent bias in the data that AI systems use to augment human judgments. With numerous questions being raised about the ethics of AI, the ability to oversee the algorithms that affect our lives is becoming essential.
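For a flavor of what such auditing can look like, here is a minimal Python sketch that checks one common fairness measure, demographic parity, on a set of model decisions. The data, group labels and tolerance threshold are hypothetical illustrations, not the method IBM and MIT are developing.

```python
# Minimal bias audit sketch: compare the rate of favorable outcomes across
# groups and flag the model if the gap exceeds a chosen tolerance.
# All data below is made up for illustration.
from collections import defaultdict

# (group, decision) pairs; 1 = favorable outcome (e.g. "hire")
decisions = [("A", 1), ("A", 1), ("A", 0), ("A", 1),
             ("B", 0), ("B", 1), ("B", 0), ("B", 0)]

totals, positives = defaultdict(int), defaultdict(int)
for group, outcome in decisions:
    totals[group] += 1
    positives[group] += outcome

rates = {g: positives[g] / totals[g] for g in totals}
print("positive rates by group:", rates)

gap = max(rates.values()) - min(rates.values())
print("disparity:", gap, "-> review needed" if gap > 0.2 else "-> within tolerance")
```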
5. Quantum Computing Goes Mainstream
Quantum computing is a technology that IBM has been working on for decades. Still, until relatively recently, it was mostly a science project, with little practical value or obvious commercial application. Today, however, the field is advancing quickly, and many firms, including Google, Microsoft and Intel, are investing heavily in the technology.
Over the next five years, the company sees quantum computing becoming a mainstream technology, and it has unveiled several initiatives to help make that happen:
- Q Experience, a real working quantum computer that anyone can access through the cloud to learn to work with the new technology
- QISKit, a set of tools that helps people program quantum computers using the popular Python language (see the short sketch after this list)
- Q Network, a group of organizations exploring practical applications of quantum computers
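For a sense of what programming a quantum computer actually looks like, here is a minimal sketch assuming a standard QISKit installation. It builds the two-qubit “Bell state” circuit, quantum computing’s hello-world, and prints the circuit diagram.

```python
# A minimal QISKit sketch (assumes `pip install qiskit`): build a two-qubit
# Bell-state circuit and print its diagram.
from qiskit import QuantumCircuit

qc = QuantumCircuit(2, 2)   # two qubits, two classical bits
qc.h(0)                     # put qubit 0 into superposition
qc.cx(0, 1)                 # entangle qubit 1 with qubit 0
qc.measure([0, 1], [0, 1])  # measure both qubits into the classical bits

print(qc.draw())            # ASCII diagram of the circuit
```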
The effort to design new computing architectures, which includes neuromorphic chips as well as quantum computers, is becoming increasingly important as Moore’s Law winds down and physical limits threaten to halt further advances in transistor-based computers.
Like the other initiatives in IBM’s 5 in 5, taking quantum computing mainstream within five years stretches the bounds of the possible, but that’s very much the point. Identifying a meaningful problem and setting a goal to solve it are the first steps in transforming an idea into reality.