The Graphene Revolution


by Brian Clegg


  The 26-year-old Bohr had been given a grant by the Carlsberg Foundation to spend a year studying in England. He had hoped to work with J.J. Thomson, the discoverer of the electron and the man behind the plum pudding model. But when Bohr turned up in Cambridge, equipped with an English copy of The Pickwick Papers and a Danish–English dictionary in an attempt to improve his English vocabulary, he rapidly found that Thomson had little interest in his work. This might not have been helped when Bohr, on their first meeting, took the chance to point out to Thomson some errors in the older man’s recently published book.

  After a few uncomfortable months at Cambridge, Bohr managed to get a transfer to Manchester, where he found the jovial, loud figure of Ernest Rutherford a much more amenable and effective mentor. Bohr himself was a quiet introvert, who struggled to put his thoughts into words, but he had a huge admiration for Rutherford and the way he worked so openly with the young physicists in his team. When Bohr had his own team, he very much based the way that he worked with them on Rutherford’s example.

  Bohr was set to work with alpha particles, at the peak of their interest in the Rutherford lab, but quickly found a greater enthusiasm for exploring the structure of the atom beyond the newly discovered nucleus. Strangely, Rutherford himself was not particularly concerned with the topic. He was more interested in the mechanism by which the atomic nucleus scattered incoming particles than in exactly what was going on in the detailed structure of the atom. However, Bohr picked up on the work of Charles Galton Darwin (the grandson of the better-known Charles Darwin), who had suggested that alpha particles that passed near an atom without bouncing off the nucleus were being slowed down by interacting with the negatively charged electrons around it.

  Bohr started to think about how the electrons around the atom managed to stay tied to the nucleus without plummeting into it. As we have already seen (page 32), it was not possible for them to be orbiting like satellites around a planet. But perhaps he could find some other way that they could stay in place but remain stable. He wrote to his brother Harald: ‘Perhaps I have found out a little about the structure of atoms. Don’t talk about it to anyone … it has grown out of a little information I got from the absorption of alpha rays.’

  Bohr knew that there was no stable way that electrons could either be arrayed stationary around the atomic nucleus or in conventional orbits. He had to come up with a more radical solution. Using the quantum idea that had been started by Max Planck and built on by Einstein (before he turned against it), Bohr suggested that electrons could only inhabit particular orbits. Where a spaceship can move incrementally from one orbit to another, Bohr believed it was impossible for electrons to exist in between the orbits, so they made an instant jump from one to the next – a so-called quantum leap. This way, the orbits themselves would be quantised.

  The available orbits were linked to the energy of the electron. The approach made sense particularly if the electron was thought of as a wave. Just as Planck and Einstein had shown that light, usually thought of as a wave, could behave as if it were a collection of particles, so electrons, usually thought of as particles, could also behave like waves. The energy of a quantum particle corresponds to the frequency of the wave. The higher the frequency (or the shorter the wavelength), the more energy the equivalent particle has.
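
  In symbols – something the text above states only in words – this is the Planck relation:

\[ E = h\nu = \frac{hc}{\lambda} \]

  where E is the energy of the equivalent particle, h is Planck’s constant, ν the frequency, λ the wavelength and c the speed of light.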

  When an electron made a quantum leap up to the next orbit, the amount of energy it gained was the equivalent of a wave getting an increase in frequency. But if the electron did act like a wave, it would have to pass around the atom in a full number of wavelengths – the waves had to match up when they met themselves having passed around the atom – and this meant that only certain wavelengths, hence specific energies of the electron, were allowed. If the electrons around an atom behaved as waves, the orbits had to be quantised.
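
  Written out as a condition – the book gives only the verbal version of this argument – an electron wave of wavelength λ travelling around a circular path of radius r must satisfy:

\[ n\lambda = 2\pi r, \qquad n = 1, 2, 3, \ldots \]

  Only wavelengths that fit a whole number of times around the orbit are possible, and with them only certain electron energies.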

  What clinched it for Bohr was accidentally discovering the earlier work of a Swiss physicist, Jakob Balmer, who had produced a formula that predicted the spectral lines of hydrogen. When an element is heated, it does not give off every colour of light, but rather produces a set of separate, specific colours (frequencies). Balmer’s equation matched Bohr’s idea for how the electron in a hydrogen atom could be allowed to jump from orbit to orbit. The gap between two orbits would be a fixed amount of energy; and the colour of the light given off when an electron jumped down across that gap – which corresponded to the energy of the photon produced – matched the spectral frequencies that Balmer’s formula predicted. It could surely not be a coincidence.
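
  To illustrate that match numerically – a sketch using standard textbook values, not a calculation taken from the book – Bohr’s hydrogen energy levels reproduce the visible wavelengths of Balmer’s series when an electron drops down to the second level:

```python
# Minimal sketch: Bohr's hydrogen energy levels reproduce the Balmer series.
# Constants are standard textbook values, not figures quoted in this book.

H_EV_S = 4.135667e-15   # Planck constant in electron-volt seconds
C = 2.997925e8          # speed of light in metres per second
E1 = -13.605693         # hydrogen ground-state energy in electron volts

def bohr_level(n: int) -> float:
    """Energy of the nth Bohr stationary state of hydrogen, in eV."""
    return E1 / n ** 2

def balmer_wavelength_nm(n_upper: int) -> float:
    """Wavelength of the photon emitted when an electron drops from
    orbit n_upper to orbit 2 (the Balmer series)."""
    photon_energy = bohr_level(n_upper) - bohr_level(2)  # positive, in eV
    return H_EV_S * C / photon_energy * 1e9              # metres -> nanometres

for n in range(3, 7):
    print(f"jump {n} -> 2 : {balmer_wavelength_nm(n):6.1f} nm")
# Prints roughly 656, 486, 434 and 410 nm - the red, blue-green and violet
# hydrogen lines that Balmer's empirical formula predicted.
```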

  Although we have been calling these possible levels the electron could occupy ‘orbits’, it wasn’t really an appropriate term to use, as the electrons were restricted to specific options. It was more as if they were running on rails that surrounded the nucleus, rather than behaving like an orbiting satellite. Bohr called these allowed energy levels ‘stationary states’, with the lowest possible energy called the ground state.

  Bohr’s model only worked for hydrogen. It would take the full-scale quantum theory that was developed a decade later to get a better picture that applied to all the elements. In essence, though, Bohr’s stationary states were the shells around the atom that electrons can occupy. Within these shells, given the Schrödinger equation’s prediction that an electron should exist as a cloud of probability rather than a classically orbiting body, the orbitals are the different possible probability distributions, which begin as a simple sphere but rapidly develop more complex, lobed shapes at higher energy levels.

  From orbitals to band gaps

  In a solid that is going to be used in an electronic device, the possible orbitals around the different atoms in the solid can and do interact as the orbitals overlap. This results in multiple possible orbitals within the material for each orbital that a single atom of that material could have. In fact, if there are 1,000 atoms in a lattice like that of graphene, the lattice as a whole has 1,000 slightly different versions of each orbital that a single carbon atom would have. In practice, there will usually be many billions of atoms, and so billions of tightly packed orbitals, so close together that they can be considered continuous bands, and are referred to as such.

  The different structures that the atoms can take mean that rather than having a continuous band featuring all possible values, there is often a pair of particularly significant bands with a gap between them. This is known as the band gap. If the outer electrons are within the bottom band, known as the ‘valence’ band, they are tied to the atom and tend to be involved in forming bonds. If they are in the upper band, the ‘conduction band’, their attachment to the atom is sufficiently weak that they can float through the substance and conduct electricity.

  In an insulator, all the outer electrons remain within the valence band and never have enough energy to cross the gap and get to the conduction band. A semiconductor, as used in electronics, still has a band gap, but it is small enough for a reasonable number of electrons to cross it. Conductors either have a very narrow band gap or none at all – and typically already have electrons in the conduction band. Graphene has a ‘zero band gap’ – the valence and conduction bands line up exactly with no gap or overlap. The valence band is well occupied, but there is nothing yet in the conduction band. Even so, this already makes graphene a good conductor.
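
  To put rough numbers on that distinction – the band-gap values below are approximate textbook figures, not taken from the book – the classification can be sketched in a few lines:

```python
# Rough sketch of the insulator / semiconductor / conductor distinction.
# Band gaps (in electron volts) are approximate textbook values, and the
# 3 eV cut-off for 'insulator' is only a conventional rule of thumb.

BAND_GAPS_EV = {
    "diamond": 5.5,     # wide gap: insulator
    "silicon": 1.1,     # modest gap: semiconductor
    "germanium": 0.67,  # smaller gap: semiconductor
    "graphene": 0.0,    # zero gap: valence and conduction bands just touch
}

def classify(gap_ev: float) -> str:
    """Crude classification by band gap alone (ignores band overlap in metals)."""
    if gap_ev > 3.0:
        return "insulator"
    if gap_ev > 0.0:
        return "semiconductor"
    return "zero-gap conductor"

for material, gap in BAND_GAPS_EV.items():
    print(f"{material:9s} gap = {gap:4.2f} eV -> {classify(gap)}")
```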

  By itself, though, this isn’t quite enough quantum theory to explain why graphene has such extraordinary abilities as a conductor. For that we need to go beyond the Schrödinger equation to the Dirac equation.

  Dirac’s contribution

  Bristol-born Paul Dirac is probably the least well-known name among the greats of quantum theory. Most people have at least heard of Heisenberg or Schrödinger, but Dirac’s name would draw a blank – even though the contribution he made to quantum physics was just as important. In part this is probably because unlike, for example, the outgoing Einstein, Dirac was pathologically shy and given to making remarks that did anything but put other people at ease. Famously, Dirac was once giving a lecture
while visiting Wisconsin. After rattling through his material at high speed, he asked for questions. An audience member said to Dirac: ‘I don’t understand the equation in the top right-hand corner of the blackboard.’ Dirac simply stared ahead without replying as if he had not heard the question. After an uncomfortable silence, Dirac was asked if he had an answer and retorted: ‘That was not a question, it was a comment.’ Some wits might have come up with this response with intended humour, but Dirac was deadly serious.

  Although, like many scientists, Dirac did make such occasional forays out into the world to visit other institutions, and enjoyed a walk out into the countryside every Sunday to clear his mind, he was most comfortable in his Cambridge study, where his laboratory equipment (like Einstein’s) was a pencil and sheets of paper. One of his early contributions to quantum physics was to show that the approach taken in Schrödinger’s equation was entirely compatible with earlier work that Heisenberg had done. Heisenberg’s version of quantum theory, matrix mechanics, worked purely by manipulating arrays of numbers, without any real-world model to suggest what was involved. Some loved its mathematical purity, but others felt it was difficult to understand with its lack of connection to anything that could be envisaged. Dirac bridged the two. However, his biggest success would be in taking Schrödinger’s equation to the next level.

  Schrödinger’s elegant piece of mathematics comes in two forms: the more complex time-dependent Schrödinger equation, which is the one that shows the probability of a particle’s location spreading out over time; and the time-independent equation, which describes the behaviour of a quantum particle that is in a ‘stationary state’, such as an atomic orbital. These equations are very effective at describing what happens to quantum particles, but they have one severe limitation. They are non-relativistic equations, meaning that they assume Newton’s laws of motion, rather than the more sophisticated variation that has to be introduced when using Einstein’s special theory of relativity.
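
  For reference – the book itself never writes them out – the two forms look like this, with Ψ the wave function and Ĥ the energy (Hamiltonian) operator:

\[ i\hbar\,\frac{\partial \Psi}{\partial t} = \hat{H}\Psi \qquad \text{(time-dependent)} \]

\[ \hat{H}\psi = E\psi \qquad \text{(time-independent, for a stationary state of fixed energy } E\text{)} \]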

  If a particle is moving slowly, this isn’t a problem. Special relativity only makes a significant difference when something is travelling at a reasonably large fraction of the speed of light. However, there are circumstances when quantum particles do move extremely quickly – including in the case of electrons in an atom or the charge carriers in graphene. Dirac felt it should be possible to combine the type of information that comes out of Schrödinger’s equation with the impact that the special theory of relativity has on motion.

  After a frantic period of work leading up to Christmas 1927, Dirac came up with his own equation for the behaviour of the electron. The equation was in four parts, which not only incorporated the special theory of relativity, while collapsing to the Schrödinger form at low speeds, but also handled another aspect of the behaviour of quantum particles called spin, which had yet to be properly dealt with by existing mathematics.
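
  In the compact modern notation (rather than the four explicit component equations Dirac originally wrote down), his equation for a free electron of mass m reads:

\[ \left( i\hbar\,\gamma^{\mu}\partial_{\mu} - mc \right)\psi = 0 \]

  where the four γ^μ are 4 × 4 matrices and ψ is a four-component wave function. It is this four-component structure that builds in both spin and, as we are about to see, the troublesome negative-energy solutions.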

  When his work was published by the Royal Society in February 1928, Dirac’s conclusions proved a significant shock to the physics world. It wasn’t so much the mathematics he used, though the work was anything but simple in that respect, but rather the implications of his equation. It would not work unless electrons could have either positive or negative energy – and the very concept of a negative amount of energy was a baffling one. Worse still, the implication was that an electron could not just take quantum leaps down to a lowest positive level, the ground state, but would continue to jump down below zero, with nothing to stop an endless cascade of further leaps. Yet electrons clearly didn’t do this.

  The Dirac sea

  For a while it seemed likely that the possible negative energy solutions to the equation would simply be ignored. There was some precedent for this. When James Clerk Maxwell had come up with his equations for electromagnetism, which had shown that light was an interaction between electricity and magnetism, the equation describing an electromagnetic wave had two distinct solutions. One was for the wave that was well known in nature, travelling from transmitter to receiver at the speed of light. But there was also a solution where a form of light wave left the receiver at the time of arrival of the normal wave and travelled backwards in time to arrive at the transmitter at the time the normal wave departed.

  Both these solutions to Maxwell’s equations were equally valid – and the backward-travelling wave would eventually be useful. However, it was usually the case that only the wave that travelled forward in time would be used and the other was ignored – swept under the carpet. After all, the forward-travelling solution perfectly matched what was observed, so why make things overly complicated? Similarly, with Dirac’s equation, the positive energy solution did a wonderful job of matching observation, and many were happy to ignore the negative energy solution. But not Dirac himself.

  Dirac would spend a good year battling the negative energy problem. He did not manage to remove it entirely, but rather came up with a scenario that meant it could exist while still usually being ignored. However, this scenario took a bit of getting used to. He imagined that every single negative energy level that an electron could occupy was already filled with electrons. This meant that the universe had a kind of infinite sea of negative energy electron positions, each occupied by an electron. Then the ‘real’ electrons that we observe would have to have positive energy, because all the negative levels were already filled, and a law of physics called the Pauli exclusion principle required that no two electrons could have exactly the same properties, including their energy level.

  Although this scenario seemed more than a little unlikely, it did make a prediction – and one that could be tested – that made it different from a situation where negative energy just didn’t exist. Inevitably, sometimes one or more of the negative energy electrons would be hit by a photon and would jump up in energy to a positive level, just as electrons jump between the normal positive energy levels. This would leave holes in the negative energy sea where the negative energy electrons had been. When that happened, ordinary, positive energy electrons could drop down into the holes, disappearing from the normal positive energy world while giving off photons. So, experimenters could look out for these negative energy electron holes, or rather their impact. Having a hole – in effect, an absent, negatively charged, negative energy electron – turned out to be identical to having a present, positively charged, positive energy electron. So, Dirac’s theory predicted that there would be a particle that was exactly like an electron, but with a positive charge.

  If such a positively charged particle were found and it met up with a normal electron, it would be like a normal electron dropping into the hole in the sea. Both the positively charged particle and the electron would disappear, giving off electromagnetic energy in the form of a pair of photons. The positive particle, soon to be called a positron, or an anti-electron, would be discovered in 1932 by Carl Anderson, a young American physicist, in cosmic ray showers – produced when high-energy particles from space crash into the Earth’s atmosphere. Ironically, when a lecture was given at Cambridge about the discovery of positrons, Dirac happened to be out of the country and didn’t hear about it until some time later.

  Alternative ways to approach Dirac’s equation were later found, without the requirement for the infinite negative energy sea model, but the basic outcomes have stood the test of time. And because the electrical charge-carrying particles in graphene are travelling extremely quickly, they can only be effectively described using Dirac’s equation rather than Schrödinger’s. And as we shall later discover, this gives graphene remarkable electrical properties.

  Quantum theory gives us all we need to comprehend what is happening inside a piece of graphene – and the same principles are essential for us to be able to produce the microelectronics that are found inside every computer, phone and electronic device, enabling quantum physics-based products to represent around 35 per cent of GDP in developed countries. However, there’s one other logical requirement for us to understand the way that quantum physics enables us to make solid state circuitry and
how ultrathin substances such as graphene can be involved in these devices. We need an idea of what basic electronic devices do, and how these are put together to make a fundamental logical concept called a gate.

  Electronic components

  Almost all electronic mechanisms depend primarily on two relatively simple components, the diode and the transistor. There are various other components, such as capacitors and resistors, but most of the functional parts of a transistorised device are based on these related components. A diode is simply a one-way path. It is an electronic component which allows electrical current to flow in one direction, but not in the other. There are a number of ways of making a diode, but the simplest form is a sandwich of two different types of semiconductor (typically materials such as silicon or germanium). A semiconductor, as we have seen, has a band gap, but it is one that can be bridged, sometimes by a secondary electrical current, sometimes with another source of energy, such as light.

  One side of the simple diode is called a ‘p-type’ semiconductor. This has been ‘doped’ with another material such as boron, which results in it having more gaps in its valence band than is normal in the semiconductor. These gaps, known as ‘holes’ (rather like the holes in the Dirac sea, though these are positive energy holes), act as if they are positively charged particles.

  The other side of the diode is an ‘n-type’ semiconductor. This also has been doped, but with a different material, such as phosphorus. An n-type semiconductor has relatively few holes in its valence band, but a lot more free electrons in its conduction band. When the diode is connected into a circuit, if the n-type side is on the negative side of the circuit and the p-type side is on the positive side, electrons will flow through the diode, attracted by the positive holes in the p-type side. However, if the circuit is connected the other way, the excess electrons on the n-type side repel any further electrons, so current won’t flow.
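
  The book describes this one-way behaviour only qualitatively. A common textbook idealisation of it – not given in the text, and with illustrative parameter values – is the Shockley diode equation, sketched below:

```python
# Sketch of the one-way behaviour of a p-n junction diode, using the
# ideal (Shockley) diode equation: I = I_s * (exp(V / V_T) - 1).
# Parameter values are illustrative, not taken from the book.
import math

I_S = 1e-12   # reverse saturation current in amps (typical small silicon diode)
V_T = 0.026   # thermal voltage in volts, roughly its value at room temperature

def diode_current(voltage: float) -> float:
    """Current through an ideal diode for a given applied voltage.
    Positive voltage means the p-type side is on the positive terminal."""
    return I_S * (math.exp(voltage / V_T) - 1.0)

for v in (-1.0, -0.5, 0.3, 0.6, 0.7):
    print(f"V = {v:+.1f} V  ->  I = {diode_current(v):.3e} A")
# Reverse-biased (negative V): the current saturates near -1e-12 A, effectively zero.
# Forward-biased beyond about 0.6 V: the current grows exponentially - the one-way path.
```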

 
