Tech Trends 2017 - Exponentials watch list: Science and technology innovations on the horizon

How long before you need to consider how to incorporate nanotechnologies, energy systems, biotechnology, and quantum technologies into the business? Not long: Though these emerging forces may be distant on the horizon, they’ll disrupt industries, strategies, and business models soon enough.

Though business applications for nanotechnologies, energy systems, biotechnology, and quantum technologies may seem light-years away, in reality they are approaching rapidly. In the next three to five years, expect to see business use cases emerge and pioneering deployments accelerate around these once-futuristic technologies. With this in mind, increasing numbers of CIOs, CTOs, and business strategists are already taking exploratory steps with these and other exponential technologies. They are sensing and scanning disruptive forces and putting in place deliberate, disciplined innovation responses. These leaders understand that waiting for exponentials to manifest as mature technology trends before taking action may be waiting too long.



Unlike other trends examined in this report that demonstrate clear business impact in the next 18 to 24 months, the exponentials we are discussing appear a bit smaller on the horizon. These are emerging technology forces that will likely manifest further out—between 24 and 60 months. But when they manifest, the speed with which they impact markets will likely grow exponentially.

For businesses, exponentials represent unprecedented opportunities as well as existential threats. As such, an analysis of exponential forces is a time-honored part of our annual discussion of emerging technologies. In our Tech Trends 2014 report, for example, we collaborated with faculty at Singularity University, a leading research institution, to explore artificial intelligence, robotics, cybersecurity, and additive manufacturing. At that time, these emerging technologies were outpacing Moore’s law—that is, their performance relative to cost (and size) was more than doubling every 12 to 18 months.1 Just a few years later, we see these same technologies are disrupting industries, business models, and strategies.

In this year’s report, we test specific aspects of four exponential forces that are being propelled by significant investment and research across the public and private sectors: nano-engineered materials, energy storage, synthetic biology, and quantum optimization. For each, we provide a high-level introduction—a snapshot of what it is, where it comes from, and where it’s going. 

In each force, we seek to identify precursors—breadcrumbs of adoption that point to early business applications. Some, if not all, of these exponentials may take 24 months or more to disrupt industries, but early adoption can offer competitive advantages. At a minimum, executives can begin contemplating how their organizations might embrace exponentials to drive innovation.

Don’t let yourself be lulled into inaction. The time to prepare is now.


My Take


We live in tumultuous times. As we have seen during the past year, political landscapes are shifting beneath our feet. News, signals, and random information come at us in torrents. At the same time, exponential technologies such as synthetic biology, advanced energy storage, nanotechnology, and quantum computing, among others, are poised to disrupt every part of our lives, every business model and market, every society. Eventually, they may even redefine what it means to be human.

What are we to make of all this? Change is happening all around us at a pace that will only accelerate. Particularly in the realm of exponentials, when seemingly radical innovations emerge, we often experience them emotionally. We feel anxious about change. Our first reaction is often to cling to something that feels stable—the past.

At Singularity University, we are trying to understand where all this change is heading. Contrary to what some may see, we see a future that is hopeful and full of historic possibility. By leveraging exponentials, we could have a future in which cancer no longer afflicts our families. Everyone—even the most pessimistic—can agree that this is a desirable goal. This is the lens through which we should all view exponentials. By harnessing the power of quantum optimization, nano-engineered materials, or synthetic biology to eliminate scarcity and uplift humans, we can tackle problems that have traditionally seemed so daunting that we’ve never imagined a world without them. Exponentials are an opportunity driver, not something to fear.

As use cases for exponentials emerge and technologies mature over the next three to five years, it will not be enough for the technology, science, academia, and business sectors to focus solely on their own goals. Collectively, we must also help build understanding throughout society of what these technologies are and where they can take us.

The future is already here. The world around us is changing every day, and will continue to do so. Unless we equip ourselves with a new vision of the future and tools to navigate it, we will wake up every morning and be surprised. At Singularity University, we believe a better path is to come together to build an awareness of where we are going and, with some rigor, talk about how exponentials can help us all build a future of abundance.


Force: Nanotechnology


The word nano is often used to describe something unusually small. For example, Tata Motors developed a compact automobile primarily for the Indian market it calls the Nano.2 But beyond its diminutive descriptive usage in product marketing, nano has a much more precise definition. Using one meter as a measuring stick, a nanometer is defined as one billionth of a meter (that’s 1/1,000,000,000). If this is hard to imagine, try using a single carbon atom as a measuring stick. A single nanometer is about the size of three carbon atoms placed side by side. In comparison, a single human hair is 80,000 to 100,000 nanometers wide.
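These scale comparisons are easy to sanity-check with simple arithmetic (a toy sketch; the hair width and atoms-per-nanometer figures are the approximate values quoted in this paragraph):

```python
# Nano-scale arithmetic using the approximate figures quoted above.
NM_PER_M = 1_000_000_000       # 1 nanometer = 1/1,000,000,000 of a meter

hair_width_nm = 90_000         # human hair: ~80,000-100,000 nm wide
atoms_per_nm = 3               # ~3 carbon atoms span a single nanometer

# How many carbon atoms would line up across a single human hair?
atoms_across_hair = hair_width_nm * atoms_per_nm
print(f"1 meter = {NM_PER_M:,} nm")
print(f"~{atoms_across_hair:,} carbon atoms span one human hair")  # ~270,000
```

In other words, nano-manufacturing operates at a scale hundreds of thousands of times finer than objects we can see.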

Nano-manufacturing—the process of making things at nano-scale—represents an important emerging capability. To create things smaller than 10 nanometers, we typically turn to advanced chemistry; to some degree, one can attribute the pharmaceutical industry’s achievements to its ability to create precise molecules at these length scales. More traditional manufacturing technologies, such as machining, can get down to features that are close to the size of a human hair, but that leaves a thousand-fold gap in length scales from making molecules to machining. Nano-manufacturing is a set of technologies and techniques that enables making things at this range of size.

The drive to develop nano-manufacturing capabilities comes from a variety of different challenges and opportunities that emerge at this scale. Perhaps the most visible driver has been the demand for cheaper and higher-performing computers. Moore’s law—the periodic doubling of transistor density, the number of transistors that can fit on a chip—is a direct result of the development of machines that can create ever-finer patterns of semiconductors. In 2014, Intel shipped chips with 14-nanometer resolution; the smallest features on these chips were spanned by fewer than 50 silicon atoms.
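Moore’s law is an exponential in the literal sense; a short sketch shows how a steady doubling compounds over time (illustrative only; real process cadences and density figures vary by manufacturer):

```python
# Moore's law as compounding growth: density doubles roughly every
# 24 months. Figures are relative (start = 1.0), not actual counts.
def density_after(years, doubling_period_years=2.0, start=1.0):
    """Relative transistor density after `years` of steady doubling."""
    return start * 2 ** (years / doubling_period_years)

for yrs in (2, 10, 20):
    print(f"after {yrs:2d} years: {density_after(yrs):>8,.0f}x")
# 10 years of 2-year doublings -> 32x; 20 years -> 1,024x
```

This compounding is why a technology that looks incremental year over year can still upend an industry within a decade.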

Medicine also drives demand for nano-manufacturing. Life emerges at nano-scale through a complex set of molecular “machines” that copy DNA and synthesize proteins; the molecules that carry out these processes are 10–100 nanometers in size. Nano-manufacturing could be used to make objects that either mimic this process—for example, to manufacture proteins that can then be used as drugs—or inhibit it directly to treat disease.

A third area driving the development of nano-manufacturing is the role of nanostructures on surfaces, in the form of coatings, lubricants, and adhesives. Nanostructures can prevent water from wetting a surface, enabling water-resistant fabrics as well as mirrors and windows that don’t fog. In a similar way, nanostructured surfaces can prevent the formation of ice—for example, on the wings of an airplane—making it much safer to fly and eliminating the need for repeated application of liquid de-icing agents.

An important business application today addresses wear and friction. These physical factors, as well as adhesion, are a product of the interaction between surfaces at the nano-scale.


Reality check

So what are some current examples of nano-engineered products that are likely to impact businesses today or in the near future?

In addition to integrated circuits, examples of products made through nano-manufacturing include nanoparticles of silver that kill bacteria and are integrated into clothing and medical devices to prevent infection; nanoparticles of titanium that block UV light and when integrated into a lotion or spray and applied to the skin prevent sunburn; and nanoparticles of pigment that make brighter paints and coatings that prevent corrosion.

Manufacturing asperities—imperfections remaining on surfaces after modern milling and machining techniques—are commonly at micron scale, but lubricant molecules are far smaller than that. By changing the surface features at nano-scale, or by introducing nanostructured materials between surfaces, friction can be reduced to provide super-lubrication or enhanced to provide super-adhesion.

NanoMech makes a nanostructured lubricant designed to mitigate these effects for critical mechanical components such as gears, bearings, valves, and chassis points. It is engineered to perform under extreme pressure, resist wear, friction, and corrosion, and remain stable at extreme temperatures, extending the service life and reducing the maintenance cost of mechanical systems. Notably, the lubricant or coating is engineered and manufactured for specific business use cases; rather than inventing wholly new ways to make nanostructured materials, the company uses off-the-shelf manufacturing technology, combining top-down fabrication and bottom-up assembly in its process.

However science-fiction-like nanotechnology’s capabilities might sound, applications are becoming evident today. For example, NanoMech’s AtomOil and AtomLube are self-replenishing, which means as friction rubs the nano-manufactured lubricant molecules apart, additional molecules are drawn into the interface. Applications may include equipment for oil and gas production; engines and other machines used in the marine, agriculture, and mining sectors; and macro-manufacturing techniques, including die casting and machining.


My Take

At NanoMech, we consider ourselves pioneers in nano-mechanics. We design and engineer products at nano-scale while continuing to produce them at macro scale. Our company slogan is, “We make atoms work harder.”

In the world of industrial lubricants, there’s an old saying: The best maintenance is low maintenance. Nano-engineered lubricants and coatings help our clients in the manufacturing, energy, automotive, and defense sectors increase mechanical performance, efficiency, and durability while reducing downtime. These designs also support sustainability: At nano-scale, we can eliminate materials traditionally used in lubricants such as chrome and petroleum products.

If all of the problems in mechanical systems and manufacturing are at nano-scale, then it follows that the solutions must be at nano-scale too. Our solutions are made possible by a powerful mechanical systems lens through which we view both present needs and future opportunities. Consider the potential market for these products: By some estimates, each day every human on earth uses an average of 10 machines. As the population grows, so will the number of machines in operation, all requiring products like ours.

The ability to engineer at nano-scale is helping us meet this demand. Over the course of six years, NanoMech has grown from one product offering to 80. Moreover, we’ve been able to drive these levels of growth using off-the-shelf components. As a practice, we take machines designed and utilized for other purposes and adapt them for use in making nano-engineered and nano-manufactured products. We occasionally see companies approach nano-engineering by building the machines they need from the ground up. Working at nano-scale doesn’t require that you reinvent the wheel; doing so is, in my opinion, a waste of time and money.

Expect to see nanotechnology take off in the next two to three years with the expansion of robotics, which represents an intersection of the mechanical and electronic worlds. Longer term, we will likely see a proliferation of nanotechnology solutions in niche markets. For example, the pharmaceutical industry is already engineering new molecules at nano-scale, and more industries will likely follow. As we journey into the future, materials science can be a catalyst for realizing new possibilities.


Force: Energy systems

As the world addresses its reliance on carbon-based energy, the sun is shining brightly and the wind is blowing at our backs. In 2014, wind and solar sources accounted for roughly 1 percent of energy consumed globally—only a tiny part of overall consumption but one that is growing rapidly.11 Wind capacity has doubled every four years and solar every two for the past 15 years.12 And with generation costs continuing to fall, this exponential trend is expected to continue, with these renewable sources projected to provide two-thirds of new generation capacity additions over the next 25 years.
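The doubling rates cited above imply steep growth when projected forward (a back-of-envelope sketch; capacities are normalized to 1.0 rather than actual gigawatt figures):

```python
# Projecting the report's doubling rates: wind capacity doubling every
# 4 years and solar every 2. Starting capacities normalized to 1.0.
def capacity_multiple(years, doubling_period_years):
    """Growth multiple after `years` of steady doubling."""
    return 2 ** (years / doubling_period_years)

for source, period in (("wind", 4), ("solar", 2)):
    print(f"{source:5s}: x{capacity_multiple(16, period):,.0f} in 16 years")
# Over the same 16-year span, wind grows 16x while solar grows 256x.
```

Even a small base, compounded this way, becomes a material share of generation within a planning horizon.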

However, the achievements of renewable energy sources also herald a challenge that ultimately may limit their further adoption.

Unlike many traditional modes of electricity generation, wind and solar are at the mercy of nature’s vagaries—without wind or sunshine, no power is produced. There are ways to alleviate this challenge. For example, because wind production is typically greater at night than in the daylight hours,14 there may be opportunities to deploy wind and solar capabilities synergistically.

Yet even if we embrace this approach, a fundamental challenge persists: aligning energy production with energy consumption.

The challenge of storing energy on a massive scale until it is needed by consumers is hardly new. One solution, pumped hydroelectric storage, has applications dating back to the 19th century. In a pumped hydro storage facility, water is pumped uphill when electricity is abundant (and cheap) and then released to flow downhill to power generation turbines when electricity is scarce (and valuable). By some measures, the pumped hydro approach is wildly effective: This technology represents an estimated 99 percent of bulk energy storage capacity worldwide.
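The physics behind pumped hydro is gravitational potential energy, E = mgh. A minimal sketch, using hypothetical reservoir figures chosen purely for illustration:

```python
# Energy stored in pumped hydro: E = m * g * h, derated by round-trip
# efficiency. Reservoir volume and head below are hypothetical.
G = 9.81              # gravitational acceleration, m/s^2
RHO_WATER = 1000.0    # density of water, kg/m^3

def stored_mwh(volume_m3, head_m, efficiency=0.75):
    """Recoverable energy (MWh) from `volume_m3` of water dropped
    through `head_m` meters at the given round-trip efficiency."""
    joules = RHO_WATER * volume_m3 * G * head_m * efficiency
    return joules / 3.6e9   # 1 MWh = 3.6e9 joules

# e.g., 5 million cubic meters of water with a 300 m head:
print(f"{stored_mwh(5e6, 300):,.0f} MWh")   # about 3,066 MWh
```

The dependence on volume and head is exactly why the approach is constrained by water access and topography, as noted above.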

But while pumped storage is a useful and relatively efficient storage mechanism, it is constrained by access to water and reservoirs as well as by topography. Therefore, the dominance of pumped hydro speaks less to its advantages than to the historic absence of credible alternatives targeted toward centralized large-scale storage on the electric grid. And this is a real problem: With the massive expansion of power sources such as wind and solar, and the increasing decentralization of energy production, we will need more energy storage capacity overall as well as the ability to deploy it flexibly in different geographies, unit sizes, and industrial and consumer applications.


Reality check

The good news: The last decade has seen an explosion of new and improving storage technologies, including more efficient batteries, compressed air, and molten salt. Utilities are deploying these approaches at or near sources of generation.

The following examples highlight some notable developments:

Favored with sunshine but facing high costs to import fuel, the Hawaiian island of Kauai is a leading consumer of renewable energy. On sunny days, solar contributes 70 percent of energy generation, a share that drops with the arrival of cloud cover. What’s more, peak energy demand is in the evening. To close the gap, the Kauai Island Utility Cooperative is working with power systems provider SolarCity to build a new solar farm and storage facility, with energy stored in lithium ion batteries supplied by automaker and energy storage company Tesla. The plant will generate power during the day, store all of the energy generated, and then release it during the high-demand evening hours.

In Lake Ontario, Canadian start-up Hydrostor has launched a pilot program using compressed air. In this approach, air is compressed and pumped into a series of underwater balloons. When energy is required, the air is released, expanded, and used to create electricity.

In commercial operation since 2015, SolarReserve’s 110-megawatt Crescent Dunes Solar Energy Project in the Nevada desert has deployed a solar thermal system in which a large field of mirrors concentrates the sun’s rays to heat molten salt. The hot salt is then stored at a temperature of over 1,000 degrees Fahrenheit in a 140-foot-diameter insulated storage tank until needed. At that point, the hot salt is used to create steam to power turbines, just like a conventional fossil or nuclear plant. Using this method, the Crescent Dunes facility can store up to 1,100 MWh of energy from the concentrating solar array in its salt each day.

Perhaps more significantly, energy storage technologies may soon offer more options for the end consumer, allowing consumers to store power at or near the point of consumption. The following examples highlight some notable developments:

In Japan, the government has set a goal for all newly constructed public buildings to meet all of their own energy needs by 2020, with the same zero-energy standard for private residences by 2030, providing a strong incentive for the development of residential-scale storage.

In the United States, several utilities are offering energy storage products to customers, and with the growth in solar panel installations, energy storage innovators such as Tesla, Orison, and SimpliPhi Power are marketing their battery technologies directly to end consumers.

While the cornerstone of disruption may be exponential improvements in both energy generation and storage, the keystone may well be a coming business-model revolution, as new models supplement the traditional model of centralized power generation and one-way distribution to multiple distributed points of consumption. With such a broad and growing set of emerging energy storage technologies—each with different performance and economic characteristics—business and retail consumer adoption patterns will likely remain difficult to predict for the foreseeable future. But regardless of which technologies emerge as leaders, both consumers and producers of energy will be presented with more choice and more complexity, transforming the traditional supply, demand, and economic relationships between many parties.

If your business consumes large amounts of energy, what is your innovation response to this disruption force?


My Take


One of the key global megatrends that researchers study at Discovery Park is growing energy demand, which will increase by 40 to 60 percent by 2050, according to the World Energy Council. Transportation will likely account for a considerable part of that growing demand. My view has always been that we are nowhere near developing optimal energy storage technologies that can be used in cars, buses, trains, and other modes of transportation. Though we’re seeing expanding use of lithium ion batteries today, there is only so much that can be done to improve battery performance and reduce costs through mass manufacturing.

Moreover, there is too much innovation happening in this space to believe that lithium ion is going to be the only answer to our energy storage needs. Though development in this area is currently in the initial research stage, it is taking place globally.

I recently bought an electric car. This model is advertised as having a range of 85 miles, which is sufficient for driving around town. I’ve noticed, however, that if the temperature outside grows warmer or colder, this car can lose up to 30 percent of its range. This is the reality of lithium ion batteries in cars—the range they offer may be less than we expect.

But that is in the near term. There is an exponential trend in energy storage; technologies currently under development will likely provide much higher energy density. We may also soon see different technologies emerge for use on the grid rather than for transportation. Volume and weight are not so important for the grid—and storage doesn’t have to be a battery pack. Redox flow batteries, which combine a flow-battery architecture with two liquid electrolytes, are among the more promising technologies currently in play. As with many innovations, there can be materials challenges, but in principle redox flow batteries can be cheaper than lithium ion, which could make them better suited for use in the grid—either centralized or distributed around points of generation and consumption.

While I do think advances in energy storage technologies may, in the coming years, deliver a breakthrough that makes $100 per kWh grid storage possible, for the foreseeable future the transportation sector will likely drive innovation. There’s more demand coming from transportation than from the grid, where renewable energy generation continues to grow. I also believe the public sector may have a critical role to play in this space, specifically in reducing some risks associated with research and development.


Force: Biotechnology

The Convention on Biological Diversity defines biotechnology broadly as any technological application that uses biological systems, living organisms, or derivatives thereof to make or modify products or processes for specific use.20 This definition makes clear that biotech’s potential disruptive impact is not limited to big players in health care and agriculture. Indeed, as it ramps up in exponential impact, biotech has relevance for industrial products, energy and natural resources, high tech, and other industries.

This year we are focusing on one area of biotech—synthetic biology—and an imminent precursor technology for gene editing and repair. Asif Dhar, principal and chief medical informatics officer at Deloitte Consulting LLP, succinctly describes synthetic biology as “bio-engineering a thing that then creates a substance.” An example might be engineering algae to produce alcohols for fuels, polymers, or building materials such as paints and coatings.21 Much of the progress in synthetic biology is not about editing basic cell behavior but about adding code to the cell so that it responds differently to signals, in a manner the cell will accept. This is a targeted redirection of a cell’s intent, and doing it with confidence requires a deep understanding of the specific cell.

The implications reach beyond the science and into business models across industries. Genetic diseases are relatively rare but usually severe—lifelong care or long-term therapeutics can be required. Chronic care today often seeks to push physiology back into place with ongoing pharmaceutical regimens. Synthetic biology could conceivably offer one-time therapy with no need to revisit treatment. In such a case, what is the best approach to payment and reimbursement when a lifetime of benefit comes from one treatment?

Understandably, there is controversy surrounding medical applications of biotechnology. Is environmental engineering possible? Will some part of society determine that germ-line or in utero engineering is acceptable? Will people start clamoring for other changes in their genome? Will ethics vary by country or culture? Those are big questions.

Regardless, the prospect of permanently correcting inherited genetic disorders such as cystic fibrosis, sickle cell anemia, and certain cancers can incite optimism for those suffering from the conditions and, potentially, fear for those imagining manipulation of the human genome in malicious ways. Regulatory and ethical debates have been just as vibrant as the scientific research, and these issues are far from settled. Nevertheless, understanding the medical, industrial, and synthetic biology applications of this disruption force is an important step toward sensing important business considerations that may shape our future.


Reality check

Currently, there is a flurry of synthetic biology inventions, patents, and IPOs, with one area in particular crackling with activity: gene editing with CRISPR.

CRISPR—clustered regularly interspaced short palindromic repeats—is a genomic editing technology. Rather than focusing on creating new capabilities or behaviors, the CRISPR enzyme acts as molecular scissors, cutting the DNA at the specified point to allow editing and correction of genetic code to work as originally intended.22 Tom Barnes, chief scientific officer of Intellia Therapeutics Inc., refers to the process of genome editing as “correcting typos in the Book of Life.” Biologists have had the tools to edit the genome, but CRISPR represents a more efficient, accurate, and malleable technique than the other tools at their disposal.

To understand this process more clearly, imagine a factory that produces a single component for a large, complex machine. This component is but one of the machine’s numerous parts, but it is critical nonetheless. Yet, due to a small error in the manufacturing software, this component tends to fail soon after the machine begins operation. Luckily, the company identifies the error and patches the software, and the component becomes a reliable part of the greater machine.

A human cell is like a tiny manufacturing facility, with DNA acting as software instructions for cellular function. The human cell possesses several checks and balances to ensure its genomic integrity, and while it is efficient in its task, errors do sometimes happen.23 In some cases, environmental damage or genetic inheritance causes errors in these instructions. Before CRISPR, there was no inexpensive, efficient, and precise synthetic mechanism for identifying a target gene repair location and manipulating the code toward a positive therapeutic outcome—that is, no reliable synthetic method of “patching” errors in the human genome across the broad array of known genetic defects that result in disease or chronic conditions.
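Extending the software analogy above, the targeting-and-patching idea can be sketched as a toy string edit (the sequences and “guide” below are invented, and real genome editing is vastly more complex than string replacement):

```python
# A loose software analogy for CRISPR-style editing: locate a target
# site (like a guide RNA does), then splice in a corrected sequence.
# Toy example only -- sequences are made up for illustration.
genome = "ATGGCCATTGTAATGGGCCGCTGAAAGGGTGCCCGA"
guide  = "ATTGTAATG"    # the target site to find ("the typo")
fixed  = "ATTGCAATG"    # the corrected sequence ("the patch")

site = genome.find(guide)            # targeting step: find the site
if site != -1:
    edited = genome[:site] + fixed + genome[site + len(guide):]
    print(edited)                    # genome with the site corrected
else:
    print("target site not found; no cut is made")
```

The key property the analogy captures is precision: nothing outside the matched site is touched, which is what distinguishes CRISPR from blunter editing tools.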

While advances in synthetic biology may make it possible to turn any living system into one we can manipulate genetically, few systems are sufficiently well understood today to be amenable to that manipulation. The tools and tricks currently used to manipulate fruit flies, for example, were developed over the course of 100 years. There is currently little comparable knowledge or experience base that can be used to similarly manipulate other potentially valuable cell lines.

Current CRISPR use cases focus on repairing cells back to the intended function. That allows a less complex starting point and potentially a less controversial set of capabilities.

Similarly, the agriculture industry is using CRISPR techniques to move faster than selective breeding and hybridization could in the past. For example, button mushrooms engineered with CRISPR do not go brown with handling and shipping.

Today CRISPR is ready to advance from a bench tool into therapeutics. Academics are collaborating with business to address the regulation, scale, and rigors of development. Developments and applications of CRISPR technology will continue to be reported and debated as they advance, but it’s not too soon for businesses to begin considering impacts.26 With the National Institutes of Health projecting cancer costs to hit $158 billion by 2020,27 CRISPR’s potential as a treatment for cancer offers hope for health care consumers and providers buckling under the increased cost and complexity of new treatments.

Pharmaceutical, oil and gas, and chemicals manufacturers are carefully following the potential of synthetic biology to engineer organisms to produce complex chemicals and other compounds.


My Take

Since the human genome was first mapped in 2000, scientists have identified the individual genes responsible for 4,300 genetic disorders, many of which can drastically reduce an individual’s quality of life or life expectancy.

Imagine a future in which these genetic disorders—chronic conditions like cystic fibrosis and hemophilia, which have afflicted humans since time immemorial—no longer burden us. At Intellia Therapeutics, we are developing therapies based on CRISPR technology that we believe will make this vision a reality.

At Intellia, my colleagues and I believe CRISPR has the potential to transform medicine by permanently editing disease-associated genes in the human body with a single treatment course.

CRISPR enables genome editing—the precise and targeted modification of the genetic material of cells. As an exponential technology, its disruptive potential is profound. In the pharmaceutical industry we have seen considerable work during the last 20 years to identify genes and gene variants that directly or indirectly drive disease. In agriculture, scientists are exploring opportunities to make crops more resistant to fungi and bacteria, and livestock more resistant to disease. And in industry and academia, scientists are gearing up to use CRISPR to edit cells in humans to fight a range of diseases.

Like ripples spreading in a pond, CRISPR will likely disrupt the health care, pharmaceutical, agricultural, and other industries for many years. At the center is the effect on those suffering from genetic disease. Yet, further out, a number of ethical questions arise—for example, around editing the germline (sperm and egg DNA, permanently affecting future generations) and around editing and releasing organisms into the environment to shift ecological balances (gene drives). These questions warrant careful consideration by society at large. At Intellia, we have chosen not to participate in germline editing and are focused on somatic cells, where we can directly target genetic diseases today.

Over the next five years, we will see different clinical efforts using genome editing technologies. We will also likely see CRISPR-driven advances in drug development, with useful therapies following shortly thereafter.

Beyond that, it is difficult to predict what the science of genome editing will look like in 2027 or beyond. Right now, we have the tool to go inside a cell and change its DNA. The challenge we encounter is getting inside the right cell. As we improve that process, we exponentially expand CRISPR’s possibilities. And as we expand those possibilities, we will inevitably encounter ethical questions about how this technology can and should be deployed. At Intellia, we are focusing on all the good we can potentially do for people suffering from genetic diseases. Think about it: In the future, we may no longer have to take the cards we’re dealt—we can swap some.


Force: Quantum technology

Quantum technology can be defined broadly as engineering that translates properties of quantum mechanics into practical applications in computing, sensors, cryptography, and simulation.

Quantum mechanics—a branch of physics dealing with the nature of matter at the atomic and sub-atomic level—can be counterintuitive. Particles behave like waves, experience quantum uncertainty, and exhibit the non-local entanglement phenomena that Einstein famously called “spooky action at a distance.” Given that most quantum phenomena are confined to the scale of atoms and fundamental particles, nontraditional materials and methods are required to explore and exploit them.

As a result, efforts to harness quantum technology for computing are hardware-driven, rely on exotic materials, and focus on achieving durable quantum states that are programmable—that is, on pursuing a general-purpose quantum computer.

Difficult engineering hurdles remain. Nonetheless, there is an active race under way to achieve a state of “quantum supremacy” in which a provable quantum computer surpasses the combined problem-solving capability of the world’s current supercomputers—at least for a certain class of problem.

Currently, the Sunway TaihuLight supercomputer in Wuxi, China, runs 10.6 million cores across 40,960 nodes and can perform 93 peta floating-point operations per second (petaFLOPS). That’s roughly 10,000 times faster than a high-end GPU today.

By contrast, a single quantum gate chip with around 60 quantum bits (qubits) would, theoretically, be more powerful than the TaihuLight computer.
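A back-of-envelope calculation (our own illustration, not from the report) shows why roughly 60 qubits marks the crossover point: an n-qubit register is described by 2^n complex amplitudes, and even storing that state classically becomes infeasible around n = 60.

```python
# Illustrative arithmetic: the classical cost of representing an n-qubit state.
# An n-qubit register holds 2**n complex amplitudes; at 16 bytes per complex
# double, the memory needed outgrows any classical machine near 60 qubits.

def amplitudes(n_qubits: int) -> int:
    """Number of complex amplitudes in an n-qubit state vector."""
    return 2 ** n_qubits

def memory_exabytes(n_qubits: int, bytes_per_amplitude: int = 16) -> float:
    """Memory needed to store the full state vector, in exabytes."""
    return amplitudes(n_qubits) * bytes_per_amplitude / 1e18

print(amplitudes(60))       # 1152921504606846976 (~1.15e18 amplitudes)
print(memory_exabytes(60))  # ~18.4 exabytes just to hold the state
```

Each added qubit doubles both numbers, which is why classical simulation hits a wall while quantum hardware, in principle, does not.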

Any company that wins the race for “quantum supremacy” will harness several key quantum effects in its architecture, including superposition, tunneling, and entanglement.

Superposition allows a quantum bit to hold zero and one values simultaneously, while in quantum tunneling, particles behave like waves to cross certain energy barriers. These counterintuitive properties allow quantum computers to solve, in practical timeframes, complex discrete combinatorial problems that are intractable for classic computers. For example, machine learning leverages pattern recognition: comparing many instances of large data sets to find the learning model that effectively describes the data.

Applying superposition and tunneling allows a quantum machine to handle many more patterns, in many more permutations, much more quickly than a classic computer can. One key side effect: the same capability could crack current data encryption and protection schemes.

Fortunately, the entanglement effect supports quantum cryptography, using the “shared noise” of entanglement to empower a one-time pad. In quantum entanglement, physically distant qubits are related such that measurements of one depend on the properties of the other, and measuring either member of an entangled pair destroys the shared entanglement. This gives senders and receivers a practical way to detect “line tapping” in digital communications.
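The one-time pad itself is a classical cipher; what quantum entanglement would contribute is the tamper-evident shared random key. As a sketch (our own classical illustration, not the quantum protocol), the pad works by XOR-ing the message with a single-use random key:

```python
# Classical one-time pad sketch (an illustration, not the quantum protocol):
# XOR the message with a shared single-use random key. Quantum key
# distribution's role would be supplying that key and exposing eavesdroppers.
import secrets

def xor_bytes(data: bytes, key: bytes) -> bytes:
    """XOR each message byte with the corresponding key byte."""
    return bytes(d ^ k for d, k in zip(data, key))

message = b"meet at dawn"
key = secrets.token_bytes(len(message))  # shared secret, used exactly once

ciphertext = xor_bytes(message, key)
recovered = xor_bytes(ciphertext, key)   # XOR is its own inverse
assert recovered == message
```

With a truly random, never-reused key the ciphertext is information-theoretically secure; the hard part, which entanglement-based schemes address, is distributing that key without interception.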

Large-scale quantum computing, whenever it occurs, could help address real-world business and governmental challenges. Peter Diamandis offers examples from several disparate disciplines. In personalized medicine, quantum computers could model drug interactions for all 20,000-plus proteins encoded in the human genome. In climate science, quantum-enabled simulation might unlock new insights into human ecological impact. Finally, quantum simulations seem to better model many real-world systems such as photosynthesis. Addressing such processes with quantum computers may lead to biomimetic advances and discoveries across many industries and use cases.33


Reality check

As companies wait for a commercial gate model quantum machine with lots of qubits and high coherence—that is, a general-purpose quantum computer—they can experiment with certain applications using quantum simulation and quantum emulation. These approaches are in use today and can show both the path to and the potential of full general-purpose quantum computing.

Quantum simulation implements the exact quantum physics (as we know it today) with the hardware and tools we have today. That is, quantum simulation directly mimics the operations that a quantum computer performs, using classic computing to understand the exact effects of every quantum gate in the simulated machine.
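To make that concrete, here is a minimal sketch (our own illustration, not from the report) of quantum simulation on classical hardware: the simulator tracks the full state vector and applies each gate as an exact matrix-vector multiply, here a single-qubit Hadamard gate.

```python
# Minimal state-vector simulation sketch: apply a Hadamard gate to one qubit
# on classical hardware, reproducing the exact quantum-mechanical result.
import math

def apply_gate(gate, state):
    """Apply a 2x2 gate matrix to a single-qubit state vector."""
    return [gate[0][0] * state[0] + gate[0][1] * state[1],
            gate[1][0] * state[0] + gate[1][1] * state[1]]

H = [[1 / math.sqrt(2),  1 / math.sqrt(2)],
     [1 / math.sqrt(2), -1 / math.sqrt(2)]]  # Hadamard gate

state = [1.0, 0.0]            # qubit initialized to |0>
state = apply_gate(H, state)  # equal superposition of |0> and |1>

probabilities = [abs(a) ** 2 for a in state]
print(probabilities)          # each outcome ~0.5: measurement is a coin flip
```

The cost of this fidelity is the exponential blowup noted earlier: an n-qubit simulation must track 2^n amplitudes and multiply by 2^n-by-2^n gate matrices, which is precisely why exact simulation stays small-scale.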

Quantum emulation targets quantum-advantaged processes without the exact physics, using mathematical shortcuts that don't compromise the results. That is, quantum emulation is only required to return the same result as a perfect quantum computation would. Instead of compiling an algorithm for specific quantum hardware, fast classical shortcuts may be executed by the emulator. Depending on the level of abstraction for the emulation, this may improve both speed and total number of operations.

As an example, Kyndi leverages quantum emulation to handle the combinatorial complexity of inferencing complex data, putting the result to work as part of a broader machine-intelligence approach in places where the volume of data overwhelms human experts. The intent is not to replace the expert—rather, it is to automate the routine parts of large-scale analysis and free humans to focus on high value add. One Kyndi proof of concept delivered analysis in seven hours of processing that the client estimated would have taken a full year using human analysts alone.

Both quantum simulation and quantum emulation approaches are backed by formal proof theory—math results from theoretical computer scientists and physicists who have done the work to show what quantum computations can be done on classic computer architectures. Those theories, though, generally do not specify algorithm design or emulation abstraction.

In the slightly longer term, quantum optimization solutions—such as the tunneling or annealing mentioned earlier—don’t provide full general-purpose compute, but they do address hard discrete combinatorial optimization problems, such as finding the shortest, fastest, cheapest, or most efficient way of doing a given task. Familiar examples include airline scheduling, Monte Carlo simulation, and web search.35 More current uses include image recognition, machine learning and deep learning pattern-based processing, and algorithmic transparency in intelligent systems (in which an algorithm can explain itself and how it came to its conclusions).

Quantum optimization addresses limitations with both rules-based and traditional case-based reasoning, in which multiple case examples are used to show levels of relationships through a commonality of characteristics. A practical problem for quantum optimization would be not only recognizing the objects in a photo but also making inferences based on those objects—for example, detecting a dog, a ball, and a person in a photo and then inferring that the group is going to play ball together. This type of “frame problem” is combinatorially large in classic rules-engine approaches.

As for quantum optimization’s longer-term future, the ability to harness computing power at a scale that, until recently, seemed unimaginable has profound disruptive implications for both the private and public sectors, as well as for society as a whole. Today, we use statistical methods to mine patterns, insights, and correlations from big data. Yet for small data that flows in high-velocity streams with low repeat rates, these statistical methods don’t apply; only the human brain can identify and analyze such weak signals and, more importantly, understand causation—the reason why. In the coming years, expect quantum computation to break the human monopoly in this area, and to become one of the most powerful models of probabilistic reasoning available.


My Take

At Kyndi, we are working to change how society’s hardest problems can be solved when human creativity and resourcefulness are complemented by smarter machines. Our technology draws from the quantum sciences to transform text in any language into a crystalline structure that provides answers to many questions.

Language is used in many different ways, and words can have multiple meanings. Normal computers struggle with reasoning in the face of such complexity. We solve this with a practical data representation inspired by methods of quantum computing. Our mapping technology automatically learns the makeup of any language and how it’s used in a given field. We store those maps in crystalline structures as graphs showing how things are related to each other. We can efficiently store many maps with complex interrelationships and yet still recall them quickly using relatively little classic computing power. We then put those structures to work via machine learning.

In a world increasingly cluttered by big data, dark data, and cacophonies of signals and seemingly random information, the growing need for technology that can analyze and draw plausible, realistic, and timely inferences from complexity drives our efforts.

Humans intuit at high speeds; normal computers, on the other hand, do not. Quantum computation—and algorithms that emulate that computation—approach solving high-complexity learning and inference within the same time scales as traditional rules-based systems do on smaller problem sets. We have tackled this challenge using algorithms that emulate quantum computation. Emulation is not technically quantum computation, but it performs analogously with currently available computers.

This approach makes it possible to solve problems of super-exponential complexity that would stymie traditional rules-based computing systems. For example, a cancer patient works with his doctor to develop a treatment plan that may include chemotherapy, radiation, surgery, and dietary regimens. In this case, there are 24 possible combinations of the four therapies statically planned at the start: one patient may start with diet, another with surgery, and so on. But if the treatment regimen allows for re-optimization of the plan at every step, the number of possibilities jumps from 24 to 24 factorial. That’s 6.2 × 10^23 potential combinations!
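The arithmetic behind that jump can be checked directly; a quick sketch (our own illustration of the numbers in the example above):

```python
# Checking the combinatorics from the treatment-plan example: 4 therapies
# ordered up front give 4! = 24 static plans, while re-optimizing the plan
# at every one of 24 decision points yields 24! possible treatment paths.
import math

static_plans = math.factorial(4)     # 24 orderings of 4 therapies
adaptive_paths = math.factorial(24)  # re-optimized at every step

print(static_plans)                  # 24
print(f"{adaptive_paths:.1e}")       # 6.2e+23, matching the figure in the text
```

This kind of factorial growth is exactly the regime where exhaustive rules-based search fails and quantum-inspired optimization becomes attractive.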

Kyndi works on resolving other similarly challenging problems in global security today, such as competitive intelligence—scanning the horizon for surprises and understanding emerging science and technology patterns. The human brain can intuit around this kind of complexity; traditional computers cannot. Yet there are situations involving big data in which a chain of thinking quickly creates huge combinatory explosions that overwhelm even the most advanced thinkers.

In the future, quantum computing will likely far outperform humans in the arenas of probabilistic reasoning and inference. Until that day arrives, you don’t have to be a quantum purist. Quantum simulation can help you understand the new concepts, and emulation can help you scale those concepts to solve some important types of problems. For example, whereas it might take a human months to read and understand 500 articles on a given subject, Kyndi’s quantum emulation systems can analyze those 500 articles in seconds and narrow the reading list down to six that explain the topic thoroughly. Quantum’s day will come. Until then, think of simulation and emulation as two interim steps—learning and applying—that are well worth exploring.


Cyber implications

It may be several years before viable, mainstream business use cases for synthetic biology, advanced energy storage, quantum computing, and nanotechnology emerge. But even in these early days of innovation and exploration, certain risk and security considerations surrounding several of these exponential technologies are already coming into view.

For example:

Nanotechnology: The health care sector is developing many groundbreaking uses for nanotech devices, from microscopic tools that surgeons can use to repair damaged tissue, to synthetic molecular structures that form the basis for tissue regeneration.

Yet like medical devices, nanotech carries with it significant compliance risk. Moreover, the microscopic size of these innovations makes them nearly impossible to secure to the same degree one would other technologies. In some cases, nano-related risk will likely need to be managed at nano-scale.

Energy storage: Batteries and grid-storage technologies do not, in and of themselves, carry significant levels of risk. However, the digital components used to control the flow of electricity, and the charge and discharge of batteries, do. As storage components become denser, more compact, and lighter, new digital interfaces and energy management tools will emerge, thus requiring new approaches for securing them.

Synthetic biology: At the crossroads of biology and engineering, synthetic biology stands poised to disrupt agriculture, medicine, pharmaceuticals, and other industries that deal with natural biological systems. Yet its seemingly limitless potential will be bounded by formidable regulation that will, in turn, raise its compliance risk profile.

Quantum computing: With the kind of algorithms and data models that quantum computing can support, predictive risk modeling may become an even more valuable component of risk management. The difference between modeling with a few hundred data attributes, as one might today, and running the same models with 20,000 or more attributes represents a potentially game-changing leap in capacity, detail, and insight. Due to the growing complexity of managing cyber risk, platforms such as quantum will likely be essential cyber components in the future.

As we begin thinking about exponential technologies and their disruptive potential—however distant that may seem—it is important to consider not only how they might be harnessed for business purposes but also the potential risks and security considerations they could introduce upon deployment. Will they make your ecosystems more vulnerable? Will they expose your organizations to additional financial, compliance, or reputation risk? Or, as in the case of quantum computing, might they turbocharge your existing approaches to security by revolutionizing encryption, predictive modeling, and data analysis?

Though we may not be able to answer these questions with certainty today, we do know that by adopting a “risk first” approach to design and development now, CIOs will be putting in place the foundational building blocks needed to explore and leverage exponential technologies to their fullest potential.


Where do you start?

While the full potential of the four exponentials examined in this report may be several years in the future, there are relevant capabilities and applications emerging now. If you wait three years before thinking seriously about them, your first non-accidental yield will likely be three to five years beyond that. Because these forces are developing at an atypical, nonlinear pace, the longer you wait to begin exploring them, the further your company may fall behind.

As you embark on your exponentials journey, consider a programmatic lifecycle approach involving the following steps:

Sensing and research: To begin exploring exponential forces and their potential, consider, as a first step, building hypotheses based on sensing and research. Identify a force—nanotechnology, for example—and hypothesize its impact on your products, your production methods, and your competitive environment in early and mid-stage emergence. Then perform sufficient research around that hypothesis, using thresholds or trigger levels to increase or decrease the activity and investment over time. It is important to note that sensing and research are not R&D—they are preliminary steps in what will be a longer effort to determine an exponential force’s potential for your business.

Exploration: Through sensing and research, you have identified a few exponentials that look promising. At this point, you can begin exploring the “state of the possible” for each by looking at how others are approaching these forces, and determining if any of these approaches could apply broadly to your industry. Then convene around the “state of the practical”: Specifically, could these same approaches impact or benefit your business? If so, you can begin developing use cases for evaluating the “state of the valuable” in the experimentation phase.

Experimentation: The move from exploration to experimentation involves prioritizing business cases and building initial prototypes, doing in-the-workplace studies, and putting them into use. When the value proposition of the experiment meets the expectations set forth in your business case, then you can consider investing by moving into incubation. Be cautious, however, about moving too quickly from incubation to full production. Even with a solid business case and encouraging experiments with containable circumstances and uses, at this stage your product is not proven out at scale. You will likely need an incubator that has full scaling ability to carry out the level of enhancement, testing, and fixes needed before putting this product out into the world.

Be programmatic: Taking any product—but particularly one grounded in exponential forces—from sensing to production is not a two-step process, nor is it an accidental one. Some think of innovation as nothing more than “eureka!” moments. While there is an element of that, innovation is more about programmatic, disciplined effort carried out over time than it is about inspiration.

Finally, in your exponential journey you may encounter a common innovation challenge: The investment you will be making often yields less—at least initially—than the day-to-day approaches you have in place. This is part of the process. To keep things in perspective and to help everyone stay focused on end goals, you will likely need a methodical program that guides and accounts for the time and money you are spending. Without such a blueprint, innovation efforts often quickly become unsustainable.


Bottom line

Though the promise that nanotechnologies, energy systems, biotechnology, and quantum technologies hold for business is not yet fully defined, some if not all of these exponentials will likely create industry disruption in the next 24 to 60 months. As with other emerging technologies, there can be competitive opportunities for early adoption. CIOs, CTOs, and other executives can and should begin exploring exponentials’ possibilities today.
