Technology and the Future

An interview with Haroon Sheikh about AI.

Haroon Sheikh is professor of philosophy at Vrije Universiteit Amsterdam, where he teaches on globalization, the philosophy of technology, and East-West geopolitics. His books address a diverse array of subjects, including relations between Europe and Asia and the politics of water. Alongside his academic career, Haroon advises the Dutch government as part of the Scientific Council for Government Policy (WRR) and has contributed to the council’s latest report on the impact of artificial intelligence, Mission AI.

"Artificial intelligence (AI) isn’t just poised to change the world; it’s already changing the world. Across a wide range of sectors, powerful AI technologies are finally leaving the lab, proving their commercial viability, and transforming entire industries. For some, this change is a source of alarm, but I want to add some nuance to the conversation about AI and provide a balanced, historically informed perspective of how AI technologies are likely to develop in the coming years.

At the outset, it’s important to draw a distinction between artificial general intelligence (AGI) and narrow AI. The former is the attempt to recreate the human mind, which has been a dream of computer scientists since the earliest days of the AI discipline. Progress on this front remains slow, mostly because our understanding of our own consciousness is still so limited. It’s incredibly difficult to recreate something that you don’t fully understand in the first place, particularly a system as complex as the human mind! Given the scale of the challenge, I don’t expect generalized intelligence to emerge any time soon – but AGI is only one part of the bigger picture. There’s also the larger and more promising field of narrow AI – that is, an AI that can perform a particular function better than a human. This arena is where the most exciting developments are currently taking place.

ChatGPT
Within narrow AI, we’re seeing the emergence of tools that don’t just interpret patterns, but create new ones – a huge step forward. One such development is ChatGPT, an AI-powered chatbot developed by OpenAI that can provide lengthy, detailed, and considered written responses to user prompts. The same goes for DALL-E, OpenAI’s image generation tool: it creates works of art based on a written prompt, and the results are truly outstanding. Tools like these are set to become mainstream within the next few years.

Federated learning is another hugely promising development. Perhaps most interestingly, I think this technology addresses many of the privacy concerns that surface in the AI debate. In federated learning, instead of feeding large quantities of data to a central algorithm, the algorithm migrates to different data sources. For instance, the AI could visit the server of a particular hospital, adjust its parameters based on what it learns from the data, and then return without taking any data with it. As such, I think federated learning will contribute to the wider adoption of AI technologies in sectors – like the healthcare industry – that are more skeptical of data-driven approaches.
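
To make the mechanics concrete, here is a minimal sketch of the pattern Sheikh describes, written in Python with NumPy. The three simulated “hospitals”, the one-parameter linear model, and all the numbers are hypothetical illustrations of parameters travelling while raw data stays put – a toy, not a production federated learning framework.

```python
# Minimal federated averaging sketch: the model visits each site,
# learns locally, and only its updated parameter travels back.
import numpy as np

rng = np.random.default_rng(42)

def make_site_data():
    # Each "hospital" holds its own private (x, y) pairs. The underlying
    # relationship y ~ 3x is shared, but the raw data never leaves the site.
    x = rng.normal(size=100)
    return x, 3.0 * x + rng.normal(scale=0.1, size=100)

local_data = [make_site_data() for _ in range(3)]

w = 0.0  # global model parameter, starts untrained

for round_num in range(20):
    local_updates = []
    for x, y in local_data:
        # The model "visits" the site: copy the global parameter and
        # take a few gradient steps on the local data only.
        w_local = w
        for _ in range(5):
            grad = 2 * np.mean((w_local * x - y) * x)
            w_local -= 0.1 * grad
        # Only the updated parameter returns - no raw data.
        local_updates.append(w_local)
    # The central server averages the returned parameters.
    w = float(np.mean(local_updates))

print(f"learned weight after federated training: {w:.3f}")  # close to 3.0
```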

Deepfakes
The story isn’t uniformly positive, though. Other developments in narrow AI are more problematic, such as the rise of generative adversarial networks – the technology behind deepfakes. The newest deepfake tools allow users to create fictitious videos with remarkable realism, and this has serious political consequences. We already live in an era of fake news and filter bubbles (online spaces where people only see content they agree with), and in a world of deepfake content, the line between facts and “alternative facts” will become even more blurred.
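
For readers unfamiliar with the adversarial setup, the sketch below shows the core of a generative adversarial network in Python with PyTorch: a generator learns to produce samples that a discriminator can no longer tell apart from real ones. The one-dimensional “real” data, network sizes, and hyperparameters are toy assumptions for illustration – deepfake systems play the same game with images and video at vastly larger scale.

```python
# Toy GAN: a generator (G) and a discriminator (D) trained as adversaries.
import torch
import torch.nn as nn

torch.manual_seed(0)

def real_batch(n=128):
    # "Real" data: samples from a Gaussian the generator must learn to fake.
    return torch.randn(n, 1) * 1.25 + 4.0

G = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))  # generator
D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1))  # discriminator (logits)

opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

for step in range(2000):
    # 1) Train D to separate real samples from generated ones.
    real = real_batch()
    fake = G(torch.randn(128, 8)).detach()  # detach: don't update G here
    loss_d = bce(D(real), torch.ones(128, 1)) + bce(D(fake), torch.zeros(128, 1))
    opt_d.zero_grad()
    loss_d.backward()
    opt_d.step()

    # 2) Train G to fool D into labelling its output as real.
    fake = G(torch.randn(128, 8))
    loss_g = bce(D(fake), torch.ones(128, 1))
    opt_g.zero_grad()
    loss_g.backward()
    opt_g.step()

# The generator's samples should now cluster near the real mean (~4.0).
print(G(torch.randn(1000, 8)).mean().item())
```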

However, I don’t think this is inevitable. With the proper regulatory approach, governments can find a path that restores public trust in AI and addresses its potential downsides. A good example of this is algorithm registration, which is already required in cities including Amsterdam and Rotterdam. Under this system, government organizations must publish all the algorithms they use. Let’s say a citizen has received a fine from the municipality: with algorithm registration, the citizen will be able to see whether the decision to fine them was made by a human or an algorithm. If it was an algorithm, they’ll be able to see which rule was being applied. I think public confidence in AI will improve substantially once citizens are no longer in the dark about what rules are being applied and how, so I hope this system is introduced more widely in the coming years.
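
As a rough illustration of what such a register makes visible, here is a hypothetical entry sketched as a Python data structure. The schema, the “ParkingFineCheck” system, and the contact address are all invented for the example; the real Amsterdam and Rotterdam registers define their own formats.

```python
# Hypothetical sketch of an algorithm-register entry and a citizen lookup.
from dataclasses import dataclass

@dataclass
class RegisteredAlgorithm:
    name: str                 # public name of the system
    organization: str         # which government body uses it
    purpose: str              # what decisions it supports
    rule_applied: str         # the legal rule or policy it implements
    automated_decision: bool  # does it decide without a human in the loop?
    contact: str              # where citizens can ask questions or appeal

register = [
    RegisteredAlgorithm(
        name="ParkingFineCheck",
        organization="Municipality of Exampletown",
        purpose="Flag vehicles parked without a valid permit",
        rule_applied="Municipal parking bylaw, article 12",
        automated_decision=True,
        contact="algorithms@exampletown.example",
    ),
]

# A citizen who received a fine can look up how the decision was made.
for entry in register:
    if entry.name == "ParkingFineCheck":
        kind = "an algorithm" if entry.automated_decision else "a human"
        print(f"Decision made by {kind}, applying: {entry.rule_applied}")
```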

I’m also encouraged by the fact that EU policymakers are reaching a consensus about AI. After seemingly endless declarations, concepts, and roadmaps, the EU AI Act is set to become binding law within the next few years. This first-of-its-kind legislation will ban applications and systems that create an unacceptable risk, place specific legal requirements on high-risk applications, and allow low-risk technologies to operate with minimal regulation. Meanwhile, I’m also seeing greater convergence in the AI debate at the practical level, particularly when it comes to the standardization of technology. These developments don’t address every possible concern, but they are steps in the right direction.

Parallels
When considering our regulatory approach, it’s useful to draw parallels with the rise of the automobile in the early 20th century. After all, AI’s current position is similar to that of the car in the 1920s: a technology that’s been proven to work, but one that still presents legitimate safety concerns. Well, throughout the 20th century, regulators responded to these dangers with bumpers, seatbelts, stoplights, and other safety measures and traffic regulations. Even today, cars are not entirely safe – but a constant process of regulatory refinement has led to huge reductions in the number of traffic accidents, and the overall utility of the technology is no longer in doubt. I think the same applies to AI: we have a powerful but largely unregulated technology hitting the market, and over the coming years, we’ll have to ensure it serves the public good.

Learning as you go is an important part of any technological revolution, and always has been. That’s why I’m not overly concerned by what’s going on in AI today – in fact, I’m excited. We’re currently on a long, fascinating path toward embedding this technology on a wide scale. History shows us these paths are never straightforward – they often require organizations to completely rethink their processes – but history also shows us that real innovation is possible. If we remain focused on regulating the narrow AI technologies that are already transforming our societies, and avoid sensationalist narratives about impending AGI, I think the future of artificial intelligence is very bright indeed."
