by Tom Standage
What does “digitally remastering” a film really mean?
In April 2017 Rialto Pictures and Studiocanal released The Graduate (1967) in a “new digital print” in honour of the film’s 50th anniversary. A 2016 version of Dr Strangelove (1964) boasts a “restored 4K digital transfer”. Citizen Kane (1941) “dazzles anew” in a “superb 75th-anniversary high-definition” digital restoration. Most film buffs understand these terms to be vaguely synonymous with improvement. But what do “restoration” and “remastering” actually involve? And are they necessary, or just a ruse to sell old movies in new packaging?
Until the 1990s, movies were made exclusively with analogue cameras and photosensitive film. These produce an image as light streams through the lens and exposes microscopic silver-halide crystals, which can then be developed into a permanent (and even colourful) picture using chemicals in a darkroom. The resulting frame is highly detailed, but also susceptible to flaws. Temperature changes, dirt or rough handling can introduce stains or a grainy texture. Digital cinematography avoids these problems: an image-sensor chip converts the scene into millions of pixels, each of which is a miniature square with a numerically coded brightness and colour. Most modern movies are made and distributed this way. It allows directors to review takes immediately, lets editors enhance them on computers and enables studios to send them to cinemas without shipping hefty reels around the world. Some purists demur, because analogue film can still produce a higher resolution.
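To make “numerically coded brightness and colour” concrete, here is a minimal sketch in Python with NumPy (tools chosen purely for illustration; the frame size and colour values are assumptions, not anything from the piece):

```python
import numpy as np

# A digital frame is just a grid of numbers: one row per scan line,
# one column per pixel, three channels for red, green and blue.
# 8-bit encoding gives each channel a brightness from 0 to 255.
frame = np.zeros((1080, 1920, 3), dtype=np.uint8)  # a black Full-HD frame

frame[0, 0] = [255, 255, 255]   # top-left pixel: pure white
frame[0, 1] = [255, 215, 0]     # its neighbour: a golden yellow

print(frame.shape)  # (1080, 1920, 3) -> ~2m pixels, ~6MB of raw data
print(frame[0, 1])  # [255 215   0] -- the numerical code for one pixel
```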
Viewers aren’t as picky, and almost all consume video digitally. Whether they are streaming Casablanca (1942) or watching The Godfather (1972) on Blu-ray (a more capacious format than DVD), they are served a scan of the original 35mm film: the studio has converted each physical image into pixels. A full restoration and remastering of a film, however, goes a step further. The film roll is first cleaned to remove dust. Technicians then work frame by frame, removing interference (such as noise, scratches and other signs of ageing), enhancing colours and sharpening outlines. Special effects and CGI may also be added. The audio is overhauled at this stage too, and perhaps remixed to introduce surround sound. The process is laborious, usually taking more than a year to complete.
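Studios do not publish their restoration pipelines, and scratch repair and colour grading involve far more elaborate tooling, but a minimal sketch of the per-frame clean-up step – written here in Python with OpenCV, with illustrative parameter values – might look like this:

```python
import cv2
import numpy as np

def restore_frame(frame: np.ndarray) -> np.ndarray:
    """Clean up one scanned frame: suppress grain and noise, then sharpen."""
    # Non-local-means denoising removes film grain and light speckling
    # while preserving detail; the strength parameters are illustrative.
    denoised = cv2.fastNlMeansDenoisingColored(frame, None, 10, 10, 7, 21)

    # A simple unsharp mask: subtract a blurred copy to crispen outlines.
    blurred = cv2.GaussianBlur(denoised, (0, 0), sigmaX=3)
    sharpened = cv2.addWeighted(denoised, 1.5, blurred, -0.5, 0)
    return sharpened

# Applied one frame at a time across a whole reel -- hence the labour involved.
```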
Such painstaking adjustments are easy to miss without looking at a side-by-side comparison. Fans tend to focus instead on tweaks to the action, because some directors cannot resist tinkering with the story as well as the image. George Lucas, who pioneered the use of digital cameras in the Star Wars prequels at the beginning of the 21st century, upset fans by adding new scenes and editing dialogue in the original Star Wars trilogy when it was remastered in 1997. DVDs of Ridley Scott’s Blade Runner (1982) boast of a “futuristic vision perfected”, partly because of the improved special effects, but also thanks to a changed ending. There are other risks: though reels of film decay and are easy to lose, they can preserve a film for decades, whereas the longevity of digital media is less certain. And heavy-handed remastering risks losing some of the qualities that made these films so special in the first place.
How bookmakers deal with winning customers
888, an online betting firm, was fined a record £7.8m ($10.3m) in August 2017 after more than 7,000 vulnerable customers, who had disabled their betting accounts in an effort to prevent themselves from gambling, were still able to gamble. Yet away from the regulator’s gaze, bookies often stand accused of the opposite excess: being too quick to shun winning customers. Successful bettors complain that their accounts get closed down, in what are sometimes described as “business decisions”. Others say their wagers get capped overnight to minuscule amounts. The move may be unpopular with punters, but in most parts of the world it is legal.
Bookmakers say scrutinising winners is necessary to help prevent fraud. Competition in the gambling industry increased with the arrival of online betting, prompting bookmakers to offer odds on markets they did not previously cover. In some, such as eastern European football leagues, low wages and late payments make fertile ground for match-fixing. A winning streak at the windows can signal foul play. Most often, however, efforts to spot savvy customers are not rooted in a desire to thwart dodgy schemes. Rather, they are part of what industry insiders call “risk management”: to remain profitable, bookies seek to cap potential losses. As one betting consultant puts it, “Bookmakers close unprofitable accounts, just as insurance companies will not cover houses that are prone to flooding”. Betting outlets get to know their customers by gleaning information online, tracking web habits and checking whether punters visit odds-comparison sites. Profiling has also been made easier by the tightening of anti-money-laundering regulations, which require online punters to provide detailed information when opening accounts.
Bookmakers argue that such screening is needed to restrict their involvement with professional gamblers. That in turn allows them to offer better odds to ordinary punters. Critics retort that the net is being cast too widely. Bookies may spend considerable resources trying to spot those who bet for a living, many of whom hire quantitative analysts to estimate outcomes and develop hedging strategies (in some cases seeking to exploit discrepancies between the odds offered by several bookmakers to make a guaranteed profit). Online bookmakers respond with sophisticated algorithms that flag customers betting odd amounts of money – £13.04, say – on the basis that ordinary punters usually wager round sums. They take a closer look at those who snub free bets or bonuses, which rarely fit professional bettors’ models and come with terms and conditions attached. They scrutinise user behaviour. While casual punters are more likely to bet minutes before an event begins, pros will often seek the best odds by placing their wagers days in advance (because the longer one waits to bet, the more information becomes available about a particular event, and thus the easier it is for bookmakers to price it). And they look at customers’ tendencies to win, sometimes accepting bets at a loss if a punter, seemingly acting on inside knowledge, allows them to gain market intelligence.
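None of these rules is public, but a toy sketch in Python shows how such flags might combine. Everything below – the field names, thresholds and cut-off score – is invented for illustration; the second function simply formalises the “guaranteed profit” trick mentioned above, for decimal odds:

```python
from dataclasses import dataclass

@dataclass
class Bet:
    stake: float               # amount wagered, in pounds
    hours_before_event: float  # how far in advance the bet was placed
    took_free_bet: bool        # whether a promotional offer was used

def looks_professional(bets: list[Bet]) -> bool:
    """Toy screening heuristic; all thresholds are invented."""
    if not bets:
        return False
    n = len(bets)
    # Ordinary punters usually wager round sums; £13.04 is a red flag.
    odd_stakes = sum(round(b.stake * 100) % 100 not in (0, 50) for b in bets)
    # Pros often place wagers days ahead to lock in the best odds.
    early = sum(b.hours_before_event > 48 for b in bets)
    # Bonuses rarely fit a professional's model, so snubbing them stands out.
    snubs_bonuses = not any(b.took_free_bet for b in bets)
    score = odd_stakes / n + early / n + (1.0 if snubs_bonuses else 0.0)
    return score > 1.5  # arbitrary cut-off for this sketch

def is_two_way_arbitrage(odds_a: float, odds_b: float) -> bool:
    """Decimal odds on opposite outcomes at two bookmakers: if the implied
    probabilities sum to less than 1, backing both guarantees a profit."""
    return 1 / odds_a + 1 / odds_b < 1
```

With decimal odds of 2.10 on each side of a two-way market at rival bookmakers, for instance, the implied probabilities sum to roughly 0.95, leaving a risk-free margin of about 5% however the event turns out.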
This explains why professional gamblers rarely do business with high-street bookmakers. They often place their trades on betting exchanges like Betfair or Smarkets, which do not restrict winning customers (though Betfair charges a premium to some of its most successful users). Alternatively, they work with those bookmakers who use successful gamblers to improve the efficiency of their betting markets, and make most of their money on commission. These profess not to limit winning accounts and accept much bigger bets (Pinnacle, an influential bookie, often has a $1m limit for major events). Betting professionals also sneak in big trades via brokers, like Gambit Research, a British operation that uses technology to place multiple smaller bets with a range of bookmakers. Asian agents, in particular, have made their names in that trade: many are able to channel sizeable bets to local bookies anonymously. Unlike the sports they love, the games played by professional gamblers and bookmakers are kept out of the spotlight.
The world’s highest-earning football clubs
Manchester United retained their title as football’s highest-earning club when Deloitte, a consultancy, released its annual Football Money League rankings in January 2018. The Red Devils failed to qualify for the 2016–17 season of the lucrative Champions League, and had to settle for winning the Europa League, a second-tier international club competition. Nonetheless, even though the club’s success on the pitch paled in comparison with that of Real Madrid, who won both the Champions League and Spain’s La Liga, the broadcasting might of the English Premier League enabled Man U to remain at the top of the financial league table.
[Chart: Wealth goals – the world’s richest football clubs by revenue. Source: Deloitte]
Deloitte’s rankings combine commercial deals such as sponsorships and shirt sales, match-day revenues and broadcast income. An ever-increasing share of clubs’ turnover now comes from the latter, which reached an all-time high of 45% last year. It is no surprise, then, that English teams make up half of the top 20. The 2016–17 season was the first of a new three-year Premier League television deal worth around £2.8bn ($4bn) per season. This is more than twice the value of the broadcasting deals struck by any of the other “big five” European leagues (those in Spain, Germany, France and Italy). As a result, Southampton FC made more money last season than AS Roma, and Leicester City’s Champions League run pushed them above Inter Milan.
Overall, football is in fine financial health. Clubs’ revenues rose across the board last year, with the top 20 collectively raking in €7.9bn ($9.8bn) compared with €3.9bn a decade ago. But the richest continued to pull ahead. The top three clubs had combined revenues of €2bn, more than the total turnover of the eleven clubs ranked 20–30. And with analysts warning that the next round of football-rights auctions in Europe will be less frenzied as viewers opt for cheaper internet-video services, the market may be reaching a peak – at least for now.
Speaking my language: words and wisdom
Why emoji have beneficial linguistic side-effects
The way the world’s languages are displayed digitally can be a topic of raging, if somewhat arcane, debate. Coders and designers may disagree over whether a particular script has differentiated upper and lower cases, or which set of accents it needs. But the latest discussion, about emoji (the icons used in electronic communications to convey meaning or emotion – think smiling yellow faces), has been stickier than most.
It is all to do with Unicode. This is a standard that assigns numbers and a corresponding description to the characters of the world’s alphabets, as well as to many other things, such as mathematical symbols. It allows different devices, operating systems and applications to show the same characters across thousands of languages, so that a WhatsApp message written in, say, Sanskrit on an iPhone in California can be read by a recipient using a Windows laptop in Kathmandu. The standard is managed by a non-profit group, the Unicode Consortium, which began operations in the early 1990s. It regularly adds more characters to the list, whether for ancient languages that academics want to use, or for modern ones with relatively few speakers or unusual characters. The Script Encoding Initiative, which was established by the University of California, Berkeley, has a list of 100 scripts from South and South-East Asia, Africa and the Middle East that have yet to be incorporated into Unicode.
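As a concrete illustration – using Python, whose standard library exposes the Unicode character database – each character’s fixed number (its “code point”) and formal name can be inspected directly:

```python
import unicodedata

# Every character has a unique code point and a formal name.
for ch in ["A", "ñ", "ز", "अ"]:
    print(f"U+{ord(ch):04X}  {unicodedata.name(ch)}")

# U+0041  LATIN CAPITAL LETTER A
# U+00F1  LATIN SMALL LETTER N WITH TILDE
# U+0632  ARABIC LETTER ZAIN
# U+0905  DEVANAGARI LETTER A

# The same code points survive the round trip through a Unicode encoding,
# which is why a message typed on one device reads the same on another.
text = "नमस्ते"  # Devanagari script, as in the Sanskrit example
assert text.encode("utf-8").decode("utf-8") == text
```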
The Unicode standard started listing codes for emoji in 2010. After emerging in Japan in 1999, emoji spread worldwide in the 2000s, but there was no common numbering or representation scheme across operating systems and messaging apps. So Windows, Android and iOS not only use different graphical renditions of those smiling yellow faces (and rice bowls, etc), but also at one time coded them with different numbers. An emoji sent from one system might appear as a completely different emoji, or even as a blank rectangular box, on arrival. Fortunately, the Unicode Consortium stepped in to standardise the numbers used, even though the specific appearance still depends on the receiving platform or application (which now includes Slack, Facebook and Twitter, as well as operating systems on different devices). The difficulty for Unicode is that demand for more emoji is growing. This is driven by the likes of Apple and Google, as well as by businesses, industries, individuals and interest groups keen to see a particular symbol represented. The American state of Maine supported the proposal to add a lobster emoji. All proposals for new emoji put to the Unicode Consortium must be discussed and voted upon.
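A short, illustrative Python snippet shows the fixed numbers behind two emoji; how the glyphs are actually drawn remains, as the text notes, up to the receiving platform:

```python
# Emoji are ordinary Unicode characters: the grinning face is the
# single code point U+1F600, standardised in 2010.
grin = "\U0001F600"
print(hex(ord(grin)))  # 0x1f600

# Later additions compose several code points: a thumbs-up plus a
# skin-tone modifier renders as one glyph on supporting platforms.
thumbs_up_medium = "\U0001F44D\U0001F3FD"
print([hex(ord(c)) for c in thumbs_up_medium])  # ['0x1f44d', '0x1f3fd']
# Whether those numbers appear in Apple's style or Google's is
# decided by the receiving device -- exactly the split the
# consortium standardised.
```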
Some of the consortium’s members worry that making decisions about new emoji is distracting them from more scholarly matters and delaying the addition of new characters from scripts both ancient and modern. Proposals for frowning piles of poo (the smiling version already exists) drew particular ire, and were described by Michael Everson, a typographer, as “damaging … to the Unicode standard”. Such concerns are exaggerated, however, says Mark Davis, co-founder of the Unicode Consortium. Although emoji occupy a disproportionate share of media attention, the consortium has created a separate committee to handle them. Mr Davis also notes that the focus on emoji has had beneficial side-effects. Many software products previously lacked Unicode support. But designers keen to incorporate emoji installed upgrades that, as a side-effect, also allowed the display of Unicode characters in hundreds of languages that would otherwise have been ignored.
How the letters of the alphabet got their names
There seems to be little predictability to the English names for the letters of the alphabet, to say nothing of the names of letters in other languages. Some begin with an e-as-in-egg sound (eff, ell); some end in an ee sound (tee, dee); and others have no obvious rhyme or reason to them at all. How did they get that way?
The vowels are all named after their long forms. In Middle English, these were roughly ah, ay (as in “may”), ee, oh, oo (as in “tool”). But the “Great Vowel Shift” scrambled the long vowels of English over several centuries, starting in around 1400. This made English vowels sound different from those in Europe, and changed the letters’ names with them, to ay, ee, aye, oh. U was still called oo after the Great Vowel Shift; only in around 1600 did it start being called yoo. Of wy, the name of Y, the Oxford English Dictionary says merely that it is of “obscure origin”. It is at least 500 years old.
The names of consonants are more regular than first appears. They use a modified form of the system handed down from Latin. “Stop” consonants – those that stop the airflow entirely – get an ee sound after them (think B, D, P and T). Consonants with a continuing airflow get an e-as-in-egg sound at the beginning instead (F, L, M, N, S, X). There are a couple of exceptions. C and G have both stop and non-stop (“hard” and “soft”) sounds, as seen in “cat” and “cent”, and “gut” and “gin”. They are called see and gee because in Latin they were only “stop” consonants and so follow the same naming rules as B and D. (Why they are not pronounced key and ghee is unclear.)
Other anomalies require a bit more explanation. R, which has a continuing airflow, used to conform to the rule above, and was called er. It changed to ar for unknown reasons. V was used as both a consonant and a vowel in Latin, and so does not fit the pattern above either: it is a fricative (a consonant in which noise is produced by disrupting the airflow), named like a stop. Double-U is a remnant of V’s old double life, too. J did not exist in Latin; its English pronunciation is inherited from French, with some alteration. Zed comes from the Greek zeta. (Americans call it zee, perhaps to make it behave more like the other letter-names, though the exact reason is unclear.) And aitch is perhaps the greatest weirdo in the alphabet. Its name is descended from the Latin accha, ahha or aha, via the French ache. The modern name for the letter does not have an h-sound in it, in most places. But there is a variant – haitch – thought by some to be a “hypercorrection”, an attempt to insert the letter’s pronunciation into its name. In the Irish republic, haitch is considered standard; in Northern Ireland, it is used by Catholics, whereas aitch is a shibboleth that identifies Protestants. But it is not limited to Ireland: haitch is also spreading among the English young, to the horror of their elders.
Why Papua New Guinea has so many languages
India, with its 1.3bn people, vast territory and 22 official languages (along with hundreds of unofficial ones), is well known as one of the most linguistically diverse countries in the world. Yet it is no match for a country of just 7.6m inhabitants in the Pacific Ocean: Papua New Guinea. Nearly 850 languages are spoken in the country, making it the most linguistically diverse place on earth. Why does Papua New Guinea have so many languages, and how do locals cope?
The oldest group of languages in Papua New Guinea are the so-called “Papuan” languages, introduced by the first human settlers 40,000 years ago. Despite falling under the “Papuan” umbrella, these languages do not share a single root. Instead, they are split into dozens of unrelated families (and some isolates – single languages with no relatives at all). This contrasts with Papua New Guinea’s Austronesian languages, which arrived some 3,500 years ago, probably from a single Taiwanese source. Things were further complicated in the 1800s by the arrival of English- and German-speaking colonists. After independence, Papua New Guinea adopted three official languages. English is the first; Tok Pisin, a creole, is the second; Hiri Motu, a simplified version of Motu, an Austronesian language, is the third. (Sign language was added in 2015.) But the lack of state recognition did not quash variety. The country’s 850-odd languages each enjoy between a few dozen and 650,000 speakers.
Many of these languages have survived thanks to Papua New Guinea’s wild topography. Mountains, jungles and swamps keep villagers isolated, preserving their languages. A rural population helps too: only about 13% of Papuans live in towns. Indeed, some Papuans have never had any contact with the outside world. Fierce tribal divisions – Papua New Guinea is often shaken by communal violence – also encourage people to be proud of their own languages. The passing of time is another important factor that has promoted linguistic diversity. It takes about a thousand years for a single language to split in two, says William Foley, a linguist. With 40,000 years to evolve, Papuan languages have had plenty of time to change naturally.
In the face of this incredible variety of languages, Papuans have embraced Tok Pisin, a creole based on English, but with German, Portuguese and native Papuan languages mixed in. It started as a pidgin, developed by traders in the 19th century. (Its name is a pidginisation of “talk pidgin”.) But in recent decades it has become the main language in Papua New Guinea. There is a Tok Pisin newspaper, and it is popular in church. Tok Pisin is now spoken by 4m Papuans, a majority of the population. Its pidgin roots help explain its success: a simple vocabulary makes it easy to learn. Its mixed heritage also makes it dazzlingly expressive. Pikinini means “child” and comes from Portuguese. The Tok Pisin for “urbanite” is susok man – “shoe sock man” in English. Yet Tok Pisin’s success may threaten Papua New Guinea’s linguistic diversity: it is slowly crowding out other languages. A dozen have already vanished. As the modern creole flourishes, ancient languages risk being lost for ever.