Six things we might need for pervasive computing
There is no doubt that digital technology will be even more pervasive in the coming decades than it is today. Organizations like the Exponential Group argue that digitization should be the first step toward sustainability, and believe that hardware and software could help reduce emissions by 15% by 2030 and beyond by helping fine-tune buildings, factories, and other environments.
Cars – already packed with processors – are turning into data centers on wheels with the growth of electric vehicles, more capable advanced driver-assistance systems (ADAS), and autonomous driving. Health care and telemedicine through new wearables and medical devices are often cited as the greatest opportunity for electronics technologies.
Nonetheless, creating an effective and economical ecosystem for pervasive computing systems requires extensive trial and error. As a futurist in Arm’s R&D organization, it’s my job to look ahead and spot the potholes. Taking some of these futuristic scenarios as guides, here are some of the fascinating obstacles I think we must overcome. (And of course, I would appreciate your feedback and suggestions in the comments section.)
1. Smart tattoos. Among other efforts, Neuralink has made ambitious attempts to harness brain and nerve impulses so that people can connect to computers. The vision is compelling – imagine how the world would change for people with limb loss or debilitating diseases – but it also raises grave concerns.
Implanting processors directly in the brain or on synapses carries considerable risks for medicine and for patients. Imagine the complexity of even a routine upgrade. At the other end of the spectrum, computer vision systems that analyze eye movements or speech are inherently limited: they can only draw on a restricted set of external data.
Smart tattoos would be able to transfer data to the cloud for analysis or execute AI directly. They could also act as a gateway for data entering the brain.
Data integrity and system security will be critical. Techniques to prevent DDoS attacks would be required, along with an automated pause button that checks incoming and outgoing data or protects the wearer from irrational impulses. In addition, AI algorithms and a human–machine interface would need to be developed to determine true intent: one could imagine three quick blinks, or some other simple body movement not easily confused with an involuntary twitch, acting as the next mouse click.
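To make the intent-detection idea concrete, here is a minimal sketch in Python of a blink-pattern detector that treats three quick blinks as a deliberate "click" while ignoring isolated twitches. The class name and all thresholds are hypothetical, chosen purely for illustration; a real interface would tune them against physiological data.

```python
from dataclasses import dataclass, field

# Hypothetical intent detector: three quick blinks within a short window
# count as a deliberate "click"; isolated blinks are treated as twitches.
# All thresholds are illustrative, not taken from any real HMI spec.

@dataclass
class BlinkIntentDetector:
    required_blinks: int = 3        # blinks that form a deliberate gesture
    window_s: float = 1.0           # gesture must complete within 1 second
    _times: list = field(default_factory=list)

    def on_blink(self, t: float) -> bool:
        """Record a blink at time t (seconds); return True when the
        blink pattern completes a deliberate 'click' gesture."""
        # Drop blinks that fall outside the gesture window.
        self._times = [x for x in self._times if t - x <= self.window_s]
        self._times.append(t)
        if len(self._times) >= self.required_blinks:
            self._times.clear()     # gesture consumed; reset state
            return True
        return False

detector = BlinkIntentDetector()
print(detector.on_blink(0.0))   # False: one blink could be a twitch
print(detector.on_blink(0.3))   # False: still ambiguous
print(detector.on_blink(0.6))   # True: three quick blinks signal intent
print(detector.on_blink(5.0))   # False: isolated blink much later
```

The key design choice is the sliding window: a single involuntary twitch can never trigger an action, because intent is only inferred from a pattern too regular to occur by accident.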
A smart tattoo should also be easily removable. Arm recently released a prototype of a flexible processor and a printed, flexible neural network. While these are still experimental, the ecosystem of technology components, manufacturing tools, and software is likely to begin solidifying in the years to come.
2. A digital product chain. As Hany Farid of UC Berkeley has pointed out, deep fakes of videos, pictures, or even someone’s voice are becoming more difficult to spot, more pervasive, and more insidious. Future elections could be won or lost with a few well-placed forgeries that sow doubt.
Now imagine the potential for mischief in the metaverse. AI-assisted video calls could be turned into fully fabricated, completely persuasive conversations and used to alter your normal behavior or decision-making. In the physical world, industrial plants could send messages that lead employees to mistakenly shut down production or, worse, fail to act against catastrophic failures.
With autonomous systems (smart plants or cars), in the event of an accident, every element of the AI-based decision-making process, at every level, should be able to tell you what happened and why a decision was made (e.g., a faulty signal, an object the LIDAR failed to recognize). Lt. Gen. Vincent Stewart describes digital counterfeiting as fifth-generation warfare that works by depriving someone of the ability to make rational decisions.
In both the physical and digital worlds, we must be able to reliably trace information back to a single point of truth. That calls for a data attestation service with easily scalable, blockchain-like capability that can trace information back to the original bits. Such a system cannot prove that something is true, but it can detect tampering along the way and ensure data integrity.
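As a sketch of what such an attestation service might look like at its core, the following Python hash chain links each record to the hash of its predecessor, so altering any historical record invalidates everything after it. The record fields and function names are illustrative assumptions, not any real service's API.

```python
import hashlib
import json

# Minimal blockchain-like attestation chain: each record commits to its
# payload AND to the hash of the previous record, so tampering with
# history changes every later hash. Field names are illustrative.

def record_hash(payload: dict, prev_hash: str) -> str:
    blob = json.dumps({"payload": payload, "prev": prev_hash}, sort_keys=True)
    return hashlib.sha256(blob.encode()).hexdigest()

def append(chain: list, payload: dict) -> None:
    prev = chain[-1]["hash"] if chain else "genesis"
    chain.append({"payload": payload, "prev": prev,
                  "hash": record_hash(payload, prev)})

def verify(chain: list) -> bool:
    """Walk the chain from the genesis link; any edit breaks a hash."""
    prev = "genesis"
    for rec in chain:
        if rec["prev"] != prev or rec["hash"] != record_hash(rec["payload"], prev):
            return False
        prev = rec["hash"]
    return True

chain: list = []
append(chain, {"source": "lidar-7", "event": "object detected"})
append(chain, {"source": "planner", "event": "emergency stop"})
print(verify(chain))                     # True: untampered
chain[0]["payload"]["event"] = "clear"   # forge the first record...
print(verify(chain))                     # False: tampering detected
```

Note what the sketch does and does not provide, matching the limitation above: it cannot tell whether "object detected" was ever true, only that the record has not been altered since it was attested.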
3. Dissolving ICs. In 2020, humanity celebrated another dubious milestone: for the first time, the mass of man-made materials likely exceeded that of natural biomass, and it is on pace to double every 20 years. Cradle-to-cradle manufacturing, in which manufacturers reuse old materials or parts, can help reduce landfill.
But how do you develop, deploy, and recycle intelligent sensors and systems in a way that is both sustainable and scalable? Dissolving ICs would give manufacturers a viable way to reclaim components. With programmable dissolvables, they could even adapt a product’s functionality and aesthetics, enabling scalable yet sustainable mass customization.
4. Data centers in the sky. Although the power consumption of data centers and networks has remained remarkably constant over the past decade, innovation is required to maintain that track record. Digital data continues to double every two years, and AI and 5G will add to the workload.
Fortunately, many popular applications do not require ultra-low latency. It is conceivable that cold (and lukewarm) data storage and moderate computing loads could be shifted to nanosatellites. Although complex calculations of total energy consumption would be required, orbital data centers would have one systemic advantage over their terrestrial counterparts: cooling would be free. While it’s a mega-engineering problem, it’s also one for which much of the basic knowledge is already in place.
5. AI-generated hardware. As all industries work through their digitization, the electronics industry will have to follow suit. IoT system design has already benefited from application enablement platforms, which make it faster and easier to design IoT devices and related applications. Around 10–20 of these platforms should reach an appropriate level of maturity within seven years. Similar enablement technologies and tools could be developed for almost any segment of the electronics industry where hardware must be co-developed with software and applications.
For semiconductors, simplifying and automating the design of complex intelligent systems through some kind of design-support platform could come sooner than we think and streamline the development lifecycle. Once huge databases of design data are available in the cloud, layering on an AI that can generate hardware and software better and faster than humans is only one step away.
Neural architecture search (NAS) tools could automate the creation of hardware-aware, trained neural networks optimized for a specific ML task. If we can represent hardware components and their relationships as cost functions for a particular process, we should be able to automatically optimize and synthesize hardware for those tasks.
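A toy illustration of that idea, with entirely made-up hardware parameters and a crude area-cost model: enumerate candidate accelerator configurations, discard those that miss a latency budget, and keep the cheapest survivor. Real hardware-aware search would use learned or measured cost models and far larger search spaces, but the structure is the same.

```python
import itertools

# Toy hardware-aware search: score candidate (MAC-array, SRAM)
# configurations with a cost function under a latency constraint.
# Every number below is fabricated for illustration only.

MAC_UNITS = [64, 128, 256]      # parallel multiply-accumulate units
SRAM_KB   = [128, 256, 512]     # on-chip buffer size

WORKLOAD_MACS = 2_000_000       # MACs per inference for the target network

def latency_ms(macs_per_cycle: int) -> float:
    # Assume a 100 MHz clock: cycles / 1e5 = milliseconds.
    return WORKLOAD_MACS / macs_per_cycle / 1e5

def cost(mac: int, sram: int) -> float:
    # Crude area proxy: silicon cost grows with compute and memory.
    return mac * 1.0 + sram * 0.5

def search(latency_budget_ms: float):
    """Return the cheapest (mac, sram) config meeting the budget, or None."""
    best = None
    for mac, sram in itertools.product(MAC_UNITS, SRAM_KB):
        if latency_ms(mac) > latency_budget_ms:
            continue                      # fails the latency constraint
        if best is None or cost(mac, sram) < cost(*best):
            best = (mac, sram)
    return best

print(search(latency_budget_ms=0.2))   # -> (128, 128): smallest config within 0.2 ms
print(search(latency_budget_ms=0.05))  # -> None: no candidate is fast enough
```

Tightening the budget pushes the search toward larger MAC arrays; making the cost function reflect energy or yield instead of area would steer it differently, which is exactly the lever the cost-function framing provides.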
6. Phase change memory for things. In 1970, Gordon Moore predicted in Electronics Magazine that phase change memories could hit the market within a decade. It did not happen. Conventional memory and storage proved more flexible and extensible than even their staunchest proponents believed, and a then-newer concept – flash – fitted neatly into traditional semiconductor cosmology. Phase change devices, meanwhile, have proven difficult to move from prototype to production. (Moore shouldn’t feel too bad: another author in the same issue wrote an article called “The Big Gamble in Home Video Recorders.”)
However, a ubiquitous Internet of Things changes the equation. Desks, windows, doors, and a range of long- and short-lived goods will soon gain new functionality through hybrid electronic systems, becoming pervasive HMIs or sensors. But they will not be permanently connected or plugged in, and they will not contain traditional computer interfaces or batteries. Instead, they will be enveloped by an intelligent second skin that reacts to RF waves, heat, or other stimuli. That creates a clear need for non-volatile memory with low power consumption. Could phase change memory provide the required capacity, non-volatility, and power profile?
The goal of a futurist is not to predict the future, but to offer a glimpse of what is possible and how it might emerge from the present. I’ve shared my point of view here. What’s yours?