It’s natural to assume that the IT revolution will continue forward at a cracking pace, but what if there are limits on how much energy humanity can actually put towards it? That’s the focus of research by Professor Michael Fuhrer, who is speaking at the Materialise conference in Wellington this week.
The principle is called Moore’s Law. At a simple level, it says the number of transistors in a dense integrated circuit should double every two years. What that means in practice is that computing and information technology can get smaller and faster, and crunch more numbers. The speed of technological development in this sense has had dramatic consequences for how technologically advanced societies have developed in recent years, and enabled the widespread digitisation of infrastructure, commerce, government, our social lives and entertainment. It is no exaggeration now to say that we live in a digital world, and that it has been a revolution that has taken place in the space of just a few decades.
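To get a feel for how quickly that doubling compounds, here is a minimal sketch. It assumes an idealised, exact two-year doubling and uses the roughly 2,300-transistor Intel 4004 of 1971 as a starting point; both the starting figures and the function name are illustrative, not from the article.

```python
# Illustrative only: an idealised Moore's Law, assuming a clean
# doubling every two years from a hypothetical 1971 baseline.
def transistors(year, base_year=1971, base_count=2300):
    """Projected transistor count under an exact two-year doubling."""
    doublings = (year - base_year) // 2
    return base_count * 2 ** doublings

print(transistors(1971))  # 2300
print(transistors(1991))  # 10 doublings: 2300 * 1024 = 2,355,200
```

Twenty years of doubling turns thousands of transistors into millions; four decades turns them into billions, which is why the trend reshaped whole industries.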
But is continuing to march further into the digital future sustainable? A desktop computer running at peak capacity might draw around 175 watts of power, compared to, for example, a compact fluorescent lightbulb, which uses about 15. Now imagine an entire office building full of those desktop computers. Now multiply that by 100 for every building in the city. Now add the energy used to charge the smartphones sitting in every one of the office workers’ pockets. Now add in the energy use of the servers that store all the data sitting on both sets of devices. Now add in all the energy used to build the devices in the first place. Now add in the electricity needed to power the various wifi networks. All of a sudden, and using only the most basic examples, the amount of energy that has powered the digital revolution starts to add up dramatically.
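The thought experiment above can be put in rough numbers. This back-of-envelope sketch takes the article’s 175-watt desktop figure; the number of desktops per building and buildings per city are hypothetical assumptions chosen only to show the scale.

```python
# Back-of-envelope tally of the article's thought experiment.
# The per-building and per-city counts are illustrative guesses.
DESKTOP_W = 175               # one desktop at peak (from the article)
DESKTOPS_PER_BUILDING = 500   # hypothetical office building
BUILDINGS = 100               # hypothetical city

desktops_total_w = DESKTOP_W * DESKTOPS_PER_BUILDING * BUILDINGS
print(f"Desktops alone: {desktops_total_w / 1e6:.2f} MW")
# 175 * 500 * 100 = 8,750,000 W, i.e. 8.75 MW of demand -- before
# phones, servers, networks, or manufacturing are even counted.
```

Even under these modest assumptions, the desktops alone draw megawatts of continuous power, and every extra layer of the stack multiplies it further.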
Or consider the more hot-button example of Bitcoin, which is considered by some to be the future of currency. The energy that powers the generation of new Bitcoins is astonishing – by one measurement, the entire network uses about as much energy as the entire country of Ireland. The process known as mining heavily incentivises users to upgrade their computing power, to accelerate how quickly they can build up their stock of Bitcoins. As of 2017, the Bitcoin network emitted the equivalent of 17.7 million tonnes of carbon dioxide every year, and that was predicted to keep rising quickly. For a world grappling with the need to take action on climate change, it is a microcosm of how the increasing digitisation of all aspects of life could work against that goal. Around the world, about 8% of all energy use already goes into computing. If that share keeps growing, the energy available for computing could, in effect, become a finite resource.
It’s these sorts of questions that Monash University Professor Michael Fuhrer spends his time working on, as the director of an Australian initiative called FLEET – or to give it the full title, the Australian Research Council Centre of Excellence for Future Low-Energy Electronics Technologies (you can see why they call it FLEET). He’s an American physicist who has conducted research into ways different materials could be used to continue increasing computing capacity, and will be speaking in Wellington at the Materialise: A Sustainable Future conference this week. (There’s also a group of New Zealand scientists who have found a way to take the heat off data centres.) Professor Fuhrer spoke to Alex Braae about the challenges the world faces in this area, and what solutions might look like.
When you look at energy constraints and the types of materials we use for computing technology, do you think it’s possible for Moore’s Law to continue?
We want more computing every year, we want to do more things with our computers, but there’s an ultimate limit to that, and it’s set by how efficient computers are. And we’re reaching that limit – a point at which we can no longer make computers any more energy efficient with current technology. Moore’s Law has been fantastic: over the last four decades or so we could predict computers would get better, faster and more energy efficient, but that’s running out right about now. It’s hard to say, but I think in 10 years we’ll look back on about 2018 as the year it ended.
And it’s not necessarily just about what a device itself can do now right?
A lot of that computation is going on in the cloud these days, so that means a big data farm somewhere. And those are plugged into the electricity grid, and they use a lot of energy, and if we want to do more of that – process videos, or whatever we want to do – that will take even more electricity.
Do the physical materials we use for computing technology come into it at all?
The fundamental issue is that you make computers out of transistors, and you make transistors out of silicon. It’s a semiconductor, and we’ve reached the limits of silicon. It’s an amazing technology, but we’ve reached the limits, and we need some other material out of which to make transistors. Not only that, they’re really going to have to be fundamentally different. So it’s not just that we need to go to another semiconductor like silicon – the industry is doing that, they’ve tried germanium – but that’s an evolutionary thing. It’s just swapping in one semiconductor for another.
And that will lead to diminishing returns?
Well we want to do something radically different. What we’re doing is instead of a semi-conductor, our idea is to use a new kind of material called a topological insulator.
A, um, what?
This is a tough one to explain to a layman. They were the topic of the 2016 Nobel Prize in physics, so it’s recognised as an important advance. The easy way to explain it: an insulator is something that doesn’t conduct electricity. A topological insulator doesn’t conduct electricity in its interior, but it does conduct on its edges or its surfaces – it’s a 3D chunk of material that conducts on every surface. If you cut it in half, you get new surfaces, and they can also conduct. And if it’s very thin – what we call two dimensional – then it just conducts along its edges. So it doesn’t conduct in the middle, but basically along lines.
You’re speaking at a conference on sustainability – is the kind of digitised world we live in now compatible with sustainable outcomes in your view?
Digitising things helps sustainability. The IT revolution has made it possible to, for instance, meet people on the other side of the planet virtually rather than flying to them. So there’s lots there that makes our lives more sustainable, and we want that to continue. We want self-driving cars, and other things that make for efficiencies in society. But there are some energy costs in the computing itself, and there’s a lot of room for improvement. And we know silicon won’t get us there, so we need some new kind of computing technology.
How much do our own priorities as a society have to come into these debates? Like, you’ll presumably be aware of the energy usage statistics for something like Bitcoin right?
In some sense that’s what we’re trying to avoid. If we don’t do anything, then we’ll reach a situation where computing energy will be limited, it’ll be a finite resource. And then we’ll have to figure out what’s most valuable and most sustainable. We’re working on trying to avoid that, and make it so that computation stays cheap and energy efficient, and we don’t have to worry about rationing it. But it’s a difficult problem, the solution isn’t known, so it’s a real back to the laboratory sort of thing, and it’s not obvious that we’ll succeed. Physics principles say it’s possible, but not everything that’s possible is easy.
This interview has been edited for length.
This content was created in partnership with the MacDiarmid Institute. “Materialise: a sustainable future” is a one-day discussion about future science for an environmentally and economically sustainable New Zealand.