Let’s take a peek into the future.

Automation, Education, and AI’s Effect on Society
AI is going to change the types of jobs that are out there. It probably won't change [them] as drastically as people think, because in many of these situations, human intuition, empathy, and decision-making will still be needed. We need to be able to prepare the workforce for that next generation of jobs.

I think there’s an opportunity to rethink how we train people. The average person changes careers multiple times and learns things on the job that didn’t exist [when they were] in college. This concept of continuous learning needs to be baked into how people work, so they can evolve as technology evolves. If you think about a classroom—a teacher standing in front of 30 students or a university professor standing in front of 300—we need to rethink education, both in terms of personalizing it to the individual and making this kind of continuous education possible on a daily basis. AI is going to be a key to that. AI can make it so that education is built into everything that we do on the job, at home. It can scale in a way that the education system right now can’t.
One thing Google did recently is publish our AI Principles. It’s almost like our constitution for how we believe AI should be used. [Often] a lot of uses of AI are in a gray area. A lot more work needs to happen, but we’ve turned [the guidelines] into an operating principle. We’re learning as we build this technology, so it’s important to set ethical AI principles at the outset; to think about these situations as they come up and adjust. As we’ve examined it, there are very few things that are just unquestionably good or unquestionably bad.
~ Rajen Sheth
IoT Privacy and Security
We have data and sensors everywhere, controlling your building, controlling public utilities like bridges. And now we’re getting into the realm where we’ll have IoT sensors with machine learning. We need to address the impact of all of this on security. The whole security model is completely broken in terms of what you think an IoT device can do and how it’s different than a traditional information technology device like a laptop or a phone.

IoT is becoming even more involuntary. Whether or not I like it, I walk past something, and it may be sensing my movement at that point, it may be sensing where I’ve been, it may be sensing how much I walk and what level of activity I do. As a society, we have to deal with the implications of this, whether it’s IoT in your home or IoT in a public space. We see some of that with Chinese companies doing facial recognition through millions of cameras.
So I think there’s a crossroads here. A lot of these use cases are so data-driven, and they are so computationally intensive that by definition, the vendors want you to install that device that ships all the data to the cloud, because it allows them to improve the machine learning and the models. I am worried about what it means for privacy, especially as we increase the amount of sensing. In a dumb home, if I may call it that, you don’t have to worry about this data leakage and these privacy concerns. There was a gap [between] what you did [and] the outside world. Now, that’s going away, and very much like what Facebook did for what we make available in public, that’s what’s going to happen even for IoT. All of this information will be there. And somehow, we have to rein this in and have regulation or rules or transparency into how that data is being used. I think that is one of the defining challenges as IoT and smart homes become much more pervasive. And then, how do we make sure that this doesn’t become a complete disaster in the long run?
~ Yuvraj Agarwal
Perceptive Tech
I also believe that in the future, all technologies, big and small, will be perceptive and sentient. The ubiquitous human-machine interface, in whatever shape it manifests, will interact with you the same way that humans interact with each other. Ubiquitous, perceptive AI will be always there, but in a subtle sense, operating in the background to make our experiences better and our lives easier. These systems and technologies will be seamlessly integrated into our day-to-day lives—the devices we use, the cars we drive, the homes we live in, and the like—all with a small footprint.
Emotion AI will provide a backbone for our digital experiences. In the coming years, I believe we’ll see that it will evolve beyond just emotion, and transform into “human-perception AI” or, as I like to say, AI that can understand all things human.
If you think about it, we’re already surrounded by AI today, with the ubiquity of technologies such as Siri, Alexa, and Google Assistant that are constantly engaging with us and learning about us. At the same time, science has shown that there are facial and vocal biomarkers of mental health. Now, imagine if these devices were equipped with human-perception AI that could detect these facial and vocal indicators of poor mental health. There’s significant potential for this technology to transform mental-health treatment and care, by serving as a way to measure people’s wellbeing and even providing real-time intervention.

AI ‘Social Contracts’
I don’t subscribe to the doomsday scenario of AI taking over humanity. But I also don’t think that the future world will be one where humans completely dominate and direct AI. Instead, I believe we’ll work in partnership. As with any successful partnership, we’ll need rules and guidelines to govern how we work together. That’s where the social contract comes into play.
I believe that mutual trust and understanding are central to this new social contract between people and AI. There’s a lot of talk about people needing to be able to trust AI, but I’d advocate that AI needs to be able to trust people too—to perform the right role in workplace settings, to operate vehicles or machines safely, and ultimately to use AI ethically and morally. Human-perception AI will be key in enabling that trust and understanding, so that people and AI can ultimately form the kind of relationships that humans have with one another, that make partnerships productive and mutually beneficial.
~ Rana el Kaliouby
Other Expected Technologies
There are a few [technologies] that should be on everyone’s radar. Biology is one of the most important technology platforms of the 21st century. Genome editing will influence the future of life on our planet, and what’s both promising and concerning is that changes made can be heritable. If you’re trying to eradicate malaria without also wiping out the entire mosquito population, deleting the part of the bug that’s capable of carrying the disease—such that the newly edited sequence is passed down to future generations—is a good thing.
However, what are the implications of making choices about heritable characteristics in humans? This isn’t the same thing as simply speeding up what would otherwise be a Darwinian process. The US doesn’t have a national biology strategy, and there are no codified norms and standards that everyone agrees to worldwide. So while gene editing could theoretically eradicate certain diseases, such as HIV, from the human population, we don’t yet know the further-reaching implications if the same technique is used to enhance certain cognitive abilities.
For example, scientists in California are working on a technique that’s sort of like a biological DVR, which records cells as they age. If we can quantify aging at a cellular level, it’s plausible we could reverse it. This seems like the type of technology that would become commercialized, which would mean that we’d have a new stratification of humans: engineered people, who stay youthful for as long as they’d like, and non-engineered humans who must suffer through the aging process. And that has ramifications for all of our futures, because people who know they’re going to live 150-plus years would likely make different decisions than people who have normal (by today’s standards) life spans. Imagine a member of Congress who serves 75 years: That would be a nightmare.

Smart Interfaces
On the hardware side of things, spatial computing environments and smart glasses will dramatically transform our communications ecosystem over the next two decades. In spatial computing environments, machines occupy space around us and are responsive to us in real time. They use sensors, 3D capture, rendering, wearable displays, and computational algorithms. This means you’ll bring your own data to a space and also generate new data in relation to it. Rather than a two-dimensional overlaid screen of information, you might be sitting across from a fully rendered AI agent who tricks you into believing she’s human. In fact, a prototype already exists; I’ve seen it, and it’s remarkable.
What all this points to is that soon, we’ll start to transition to the next era of computers; a post-screen era where humans are intertwined with computing environments rather than carrying them around in our pockets. Smart glasses will begin to replace smartphones, and the transition from smartphones to smart wearables and invisible interfaces—earbuds that have biometric sensors and speakers, rings and bracelets that sense motion, smart glasses that record and display information—will forever change how we experience the physical world.
Transportation
The ecosystem I’m hoping for is egalitarian and smart: smaller pods that are capable of safely transporting people, pets, and objects where they need to go without massive delays. Autonomy can end the public-transit problem that exists in many cities, where hard-working people must spend two hours … just to get home each night. Autonomous vehicles would operate on a network of interconnected roads, bridges, and underground tunnels that are continuously maintained. And—since I’m a bit of a car nut—this idealized future would still allow me to drive an old-school supercar, like a Ferrari 458, around a performance track.
What I think is more likely to unfold is that it will take longer to reach full autonomy in cars and trucks than everyone is expecting, and that’s because in the US, our government hasn’t engaged in long-term planning and strategy. So there are numerous dependencies still left to be developed, like insurance rates, how to safely transition our current infrastructure, car ownership models, and the like. We’ll see car companies competing for market share rather than collaborating. Elon Musk’s tunnel project will continue to be scrutinized. Meanwhile, China will move ahead with its various maglev high-speed train projects. It’ll be a scattershot of options for many years.

Artificial Intelligence
I am gravely concerned about what I call the Big Nine tech giants who are effectively in charge of AI’s destiny. Those companies are Google, Amazon, IBM, Microsoft, Apple, and Facebook, and China’s Baidu, Alibaba, and Tencent. Humanity is facing an existential crisis in a very literal sense, because no one is addressing a simple question that has been fundamental to AI since its very inception: What happens to society when we transfer power to a system built by a small group of people that is designed to make decisions for everyone? What happens when those decisions are biased toward market forces or an ambitious political party?
The answer is reflected in the future opportunities we have, the ways in which we are denied access, the social conventions within our societies, the rules by which our economies operate, and even the way we relate to other people. In the US, relentless market demands and unrealistic expectations for new products and services have made long-term planning impossible. Our government has no grand strategy for AI nor for our longer-term futures living with AI. Instead of funding basic research into AI, the federal government has effectively outsourced R&D to the commercial sector and the whims of Wall Street. In China, AI’s developmental track is tethered to the grand ambitions of government. AI is part of a series of national edicts and laws that aim to control all information generated within China and to monitor the data of its residents as well as the citizens of its various strategic partners.
~ Amy Webb
We are building the year 2039 future right now, in the present. We ought to think more exponentially and agree to act incrementally. We each play a critical role in what’s developing on the horizon. That means you, dear reader.