
Seriously Curious


by Tom Standage


  Is Serbo-Croatian one language or four?

  Around 17m people in Bosnia, Serbia, Croatia and Montenegro speak variations of what used to be called Serbo-Croatian or Croato-Serbian. Officially, though, the language that once united Yugoslavia has, like the country, ceased to exist. Instead, it now has four names: Bosnian, Serbian, Croatian and Montenegrin. But are these really all the same language?

  The answer, according to a group of linguists and NGOs from the four countries, is a resounding “yes”. Working under the banner of a project called “Language and Nationalism”, the group issued a “declaration on the common language” in 2017. It stated that the four tongues together form a “polycentric” language, similar to English, German or Arabic. They argue that although different dialects exist, these are variations of the same language, because everyone who speaks it can understand one another. Indeed, this makes the four tongues more similar than the dialects of many other polycentric languages. The authors consider the insistence by educational and public institutions on the usage of only one of the four name variants to be “repressive, unnecessary and harmful”. The aim of the declaration is to stimulate discussion on language “without the nationalistic baggage and to contribute to the reconciliation process”, said Daliborka Uljarevic, the Montenegrin partner behind the declaration.

  The insistence on calling Serbo-Croatian by four different names leads to endless absurdities. Children who live in the same town in Bosnia go to school in the same building but to classes in different languages. The Bosnian government portal is published in four languages: English, Bosnian and Croatian, which are written in the Latin alphabet, and Serbian, which is written in Cyrillic script. Yet the region’s politicians do not need translations when meeting. When war criminals are on trial before the UN tribunal in The Hague, they receive interpretation in the dialect spoken by the translator who happens to be on duty. A well-circulated meme from Bosnia highlights the absurdity: it features cigarette packets that repeat “smoking kills” twice in the Latin script and once in Cyrillic, all spelled identically.

  As in so many parts of the world, the tussle over language is political. Nationalist Serbs see the 2017 declaration as an attempt to undermine the link between Serbs in Serbia, Bosnian Serbs and Montenegrins. Defusing the language issue would take away a tool that nationalists have used to stir trouble by emphasising differences. Nationalist Serbs fear that if everyone in Bosnia thought they spoke the same language, that would undermine their political ambition of eventually destroying the country. Nationalist Croats trace the struggle for independence, in part, back to the campaign by academics in the 1960s who claimed that Croatian was a separate language. If it were, they argued, then Croats must be a separate people, and hence not Yugoslavs. Yet most ordinary people do not care much about the issue. When they ask if you speak their language, more often than not, they call it simply naški, “ours”.

  How language is bound up with national identity

  The rise of populism in Europe and the United States in recent years has revealed how deeply divided voters are over immigration. Nationalists and populists, from Donald Trump to Britain’s UK Independence Party and Alternative for Germany (AfD), have proclaimed that governments should make keeping foreigners out a priority. But pinning down exactly what defines a foreigner, and what defines a national, is tricky. This is partly because identity is based on a nebulous mix of values, language, history, culture and citizenship.

  A poll published in February 2017 by the Pew Research Centre, a think-tank, attempted to unravel the idea of how someone can be judged to be genuinely American, British or German. It asked respondents about various characteristics – language spoken, customs observed, religion and country of birth – and how important they were to being a national of their country.

  [Chart: “National expression” – How important is the following for being truly (nationality)? Apr–May 2016, % responding very important. Source: Pew Research Centre. *Catholic]

  On average, over the 15 countries surveyed, speaking a state’s national tongue was seen as the most important trait. The Dutch rated this higher than anyone, whereas Canadians were least concerned about linguistic ability, with only half saying that being able to converse in English or French (one of the two national languages) was very important. One reason may be that Canada is divided by language; another is that, along with Australia, it had the largest share of people born abroad among the countries polled, at over 20% of the population.

  Recent experiences with immigration appear to affect different countries in different ways. People in Greece and Hungary, which have been transit countries for large numbers of migrants from the Middle East, placed strikingly high importance on sharing customs and traditions, and being born in the country (Greeks also cared strongly about being Christian). Yet in Germany, the ultimate destination for many of the refugees and migrants, respondents gave comparatively little weight to such factors. That suggests that there may still be life in Germany’s Willkommenskultur (“welcoming culture”) – or at least that the AfD party still has some way to go before becoming a real contender for power.

  How machines learned to process human language

  Gadgets that can understand and respond to spoken commands are growing in popularity. Amazon’s Echo devices, featuring a digital assistant called Alexa, can be found in millions of homes. Ask Alexa to play music, set a timer, order a taxi, tell you about your commute or tell a corny joke, and she will comply. Voice-driven digital assistants from other big tech firms (Google Assistant, Microsoft’s Cortana and Apple’s Siri) have also vastly improved. How did computers learn to process human language?

  The original approach to getting computers to understand human language was to use sets of precise rules – for example, in translation, a set of grammar rules for breaking down the meaning of the source language, and another set for reproducing the meaning in the target language. But after a burst of optimism in the 1950s, such systems could not be made to work on complex new sentences; the rules-based approach would not scale up. Funding for so-called natural-language processing went into hibernation for decades, until a renaissance in the late 1980s.

  Then a new approach emerged, based on machine learning – a technique in which computers are trained using lots of examples, rather than being explicitly programmed. For speech recognition, computers are fed sound files on the one hand, and human-written transcriptions on the other. The system learns to predict which sounds should result in what transcriptions. In translation, the training data are source-language texts and human-made translations. The system learns to match the patterns between them. One thing that improves both speech recognition and translation is a “language model” – a bank of knowledge about what (for example) English sentences tend to look like. This narrows the system’s guesswork considerably. In recent years, machine-learning approaches have made rapid progress, for three reasons. First, computers are far more powerful. Second, they can learn from huge and growing stores of data, whether publicly available on the internet or privately gathered by firms. Third, so-called “deep learning” methods have combined faster computers and more abundant data with new training algorithms and more complex architectures that can learn from example even more efficiently.
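  The “language model” idea can be illustrated with a toy sketch: count which word tends to follow which in training text, then use those counts to score candidate sentences. This is only an illustration of the principle (real systems use neural networks trained on vastly more data); the corpus and function names below are invented for the example.

```python
from collections import defaultdict

def train_bigram_model(corpus: list[str]):
    """Count, for each word, how often each following word appears."""
    counts = defaultdict(lambda: defaultdict(int))
    for sentence in corpus:
        words = sentence.lower().split()
        for a, b in zip(words, words[1:]):
            counts[a][b] += 1
    return counts

def score(model, sentence: str) -> float:
    """Score a sentence as the product of its bigram frequencies."""
    words = sentence.lower().split()
    p = 1.0
    for a, b in zip(words, words[1:]):
        total = sum(model[a].values())
        p *= model[a][b] / total if total else 0.0
    return p

# Tiny corpus standing in for the "huge stores of data" (illustrative only).
corpus = ["it is going to rain", "it is going to snow", "rain is wet"]
model = train_bigram_model(corpus)

# The model prefers word orders it has seen before, which is how a language
# model narrows a speech recogniser's guesswork between similar candidates.
print(score(model, "it is going to rain"))   # plausible word order, nonzero
print(score(model, "rain to going is it"))   # implausible word order, zero
```

A real recogniser combines such a score with acoustic evidence, so that of two transcriptions that sound alike, the one that looks more like ordinary English wins.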

  All this means that computers are now impressively competent at handling spoken requests that require a narrowly defined reply. “What’s the temperature going to be in London tomorrow?” is simple (though you don’t need to be a computer to know it is going to rain in London tomorrow). Users can even ask in more natural ways, such as, “Should I carry an umbrella to London tomorrow?” (Digital assistants learn continually from the different ways people ask questions.) But ask a wide-open question (“Is there anything fun and inexpensive to do in London tomorrow?”) and you will usually just get a list of search-engine results. As machine learning improves, and as users let their gadgets learn more about them specifically, such answers will become more useful. Privacy advocates worry about the implications of being surrounded by devices that are constantly listening. But if the past few years of smartphone use are any indication, consumers are happy to set aside such concerns in return for the convenience of being able to operate a computer simply by speaking to it. Indeed, it is rather like casting a spell: say the right words, and something happens. That is the magic of machine learning.

  Why the World Bank needs to cut down on the word “and”

  An unusual war of words flared up in early 2017 at the World Bank. Paul Romer, its new chief economist, was stripped of control of the research division. An internal memo claimed that the change was to bring the operations department closer to the Bank’s research arm. But many suspected that it was because Mr Romer had clashed with staff over the Bank’s writing style. He had demanded shorter, better-written reports. In particular, Mr Romer questioned the excessive use of the word “and”. He proclaimed that he would not clear a final report for publication if “and” made up more than 2.6% of the text. His tenacious approach was said to have rubbed some employees up the wrong way. Was Mr Romer’s complaint justified?

  The prevalence of “and” is hardly the only or indeed the best measure of good writing style. But used to excess, it can render prose turgid or, at worst, unreadable. One of the Bank’s reports from 1999 promised to “promote corporate governance and competition policies and reform and privatise state-owned enterprises and labour market/social protection reform.” The 2.6% limit set by Mr Romer roughly matches the prevalence of “and” in academic work. (By comparison, in a typical week’s print edition of The Economist, “and” accounts for just 1.5% of the text, excluding advertisements.)
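  Mr Romer’s 2.6% bar is easy to check mechanically. Here is a minimal sketch in Python; the crude letters-only tokeniser and the function names are assumptions for illustration, not the Bank’s actual method.

```python
import re

def and_share(text: str) -> float:
    """Return the word 'and' as a percentage of all words in the text."""
    words = re.findall(r"[a-z]+", text.lower())  # crude tokeniser: letters only
    if not words:
        return 0.0
    return 100 * words.count("and") / len(words)

def clears_romer_bar(text: str, limit: float = 2.6) -> bool:
    """Mr Romer's rule: reject a report if 'and' exceeds 2.6% of its words."""
    return and_share(text) <= limit

# The 1999 sentence quoted above, as a worked example.
sample = ("promote corporate governance and competition policies and "
          "reform and privatise state-owned enterprises and labour "
          "market/social protection reform")
print(f"{and_share(sample):.1f}%")   # far above the 2.6% bar
print(clears_romer_bar(sample))
```

Run on that single sentence, “and” makes up over a fifth of the words, which is exactly the sort of prose the threshold was meant to flag.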

  [Chart: “Conjunction dysfunction” – prevalence of “and” in World Bank reports. Sources: “Bankspeak: The Language of World Bank Reports 1946–2012” by F. Moretti and D. Pestre, 2012; World Bank]

  A study by Franco Moretti and Dominique Pestre of the Stanford Literary Lab, a research outfit, analysed the language used by the World Bank since its founding in the 1940s. Back then, the average report roughly met the 2.6% standard. By 2012, however, the conjunction was taking up about 6% of the words in its reports. Other stylistic sins abounded. Acronyms had come to account for about 5% of reports, up from 3% in the 1970s. Financial terms, such as “fair value” and “portfolio”, had also become more popular. The World Bank’s report-writers face other difficulties, too. In 2014 the Bank’s number-crunchers highlighted the unpopularity of its studies: of the 1,611 documents they assessed, 32% were never downloaded by anyone. So Mr Romer was making a good point. If the World Bank wants its reports to be read, it could at least make them a bit more readable.

  The French argument over a new punctuation mark

  In France, questions of language often touch off fiery national debates. In 2016 reforms meant to simplify tricky spellings – including the optional deletion of the circumflex from some words – provoked outrage and an online protest called #JeSuisCirconflexe. In 2017, another bout of linguistic anguish provoked an intervention from the prime minister and alarm from the French Academy, the official guardian of the French tongue, over a “mortal peril” to the language. It stemmed from the publication of a third-grade grammar textbook featuring a rare punctuation mark. Why did this cause such distress?

  All French nouns require a gender, which is often unconnected to the thing itself. There is nothing especially masculine, for instance, about le bureau (the desk) or feminine about la table (the table). In other cases, a noun’s gender is derived from the biological sex of its referent: un directeur is a male director; une directrice is a female one. Since the 17th century, the rule for plurals has been that the masculine always trumps the feminine. The reason, according to an early member of the French Academy, is that “the masculine is more noble”. Therefore, if only one directeur joins a group of 500 directrices, they collectively become les directeurs. The grammatical dominance of the masculine in French frequently creates conflict. A commission was created in 1984 to feminise job titles in order to recognise the growing numbers of women working in traditionally male-dominated professions. Its recommendations were so detested that the French government did not make the feminisation of professions mandatory until 1998.

  The disputed textbook offered a solution to what some feminists believe is an example of the sexism encoded in the French language. In order to refer to both genders, it inserts a floating dot, known as an interpunct, after the masculine version of certain plural nouns, and follows it with the feminine ending. So the group of one male and 500 female directors, for instance, becomes les directeur·rice·s. Few paid attention in 2015 when the High Council for Gender Equality, a consultative state body tasked with promoting equal rights, proposed the fix in a list of recommendations on implementing gender-inclusive language. The backlash to the textbook’s publication was rather swifter. The (predominantly male) French Academy, created by Cardinal Richelieu in 1635, warned that this “aberration” would create “a confusion close to illegibility” and allow other languages to “take advantage to prevail”. Édouard Philippe, the prime minister, weighed in, asking ministers “not to make use of the so-called inclusive writing in official texts”.

  The controversy over gender-inclusive language came just as France grappled with its own #MeToo protests against sexual abuse and harassment, called #BalanceTonPorc (“Expose your pig”). More than 300 French teachers signed a manifesto saying that they would no longer teach the rule that the masculine dominates the feminine. Technology, too, is playing a role in helping to regularise gender-inclusive language, despite the warning cries from the French Academy. The French Association of Normalisation, a national standard-setting body, said that it is designing a new French keyboard that will include an interpunct. There are good reasons to do so. Several studies suggest that gender-inclusive language can help reduce gender stereotyping and discrimination; others suggest a link between gendered languages and lower rates of female workforce participation. Whether or not the interpunct catches on, that is a very good point.

  Seasonal selection: festivals and holidays demystified

  Where new year’s resolutions come from

  At the start of every year millions of people make resolutions promising improvements in their lives. Alcohol is forsworn, exercise embraced, hobbies sought. But though it may make sense to respond to the indulgences of Christmas with catharsis, the tradition of new-year resolutions is far older than the establishment of the Christian festival, or even the placing of the new year in the middle of winter.

  The Babylonians were the first civilisation to leave records of new-year festivities, some 4,000 years ago. Their years were linked to agricultural seasons, with each new year beginning around the spring equinox (late March, by our modern calendar). A 12-day festival to celebrate the renewal of life, known as Akitu, marked the beginning of the agrarian year. During Akitu people keen to curry favour with the gods would promise to repay their debts and to return borrowed objects. In a similar vein the ancient Egyptians would make sacrifices to Hapi, the god of the Nile, at the beginning of their year in July, a time when the Nile’s annual flood would usher in a particularly fertile period. Offering sacrifices and prayers, they would request good fortune, rich harvests or military successes.

  The Romans continued the habit, but also changed the date. The Roman year is said to have originally had ten months, starting in March around the spring equinox, plus another 60-odd winter days that were not included in the named months. Around 700BC, two more months were added, but it was not until 46BC, when Julius Caesar introduced a reformed calendar, that January was officially established as the beginning of the year. Because this was the date on which the city’s newly elected consuls began their tenure, it marked a shift in calendric emphasis from agrarian cycles to civil rotations. Roman new-year festivities included the worship of Janus, the god of beginnings and endings, after whom the month of January is named. But the persistence of these traditions annoyed later Christians, and in medieval Europe attempts were made to celebrate the new year on dates of religious significance, such as Christmas, or the Feast of the Annunciation in March. Attitudes to resolutions also changed. Prayer vigils and confessions were used to pledge allegiance to religious values. At the end of Christmas feasts, some knights were said to have taken an oath known as “The Vow of the Peacock”, in which they placed their hands on a peacock (a bird considered noble) in order to renew their commitment to chivalry. This moral flavour to the pledges persisted. A 17th-century Scotswoman wrote in her diary of taking Biblical verses as starting points for resolutions (“I will not offend anymore”).

  By the time the phrase “new-year resolutions” first appeared, in a Boston newspaper in 1813, the pledges were losing their religious overtones. An article published a few years earlier in Walker’s Hibernian Magazine, Or, Compendium of Entertaining Knowledge, an Irish publication, satirises the practice. It states that doctors had solemnly pledged to “be very moderate in their fees” and statesmen to “have no other object in view than the good of their country”. Yet the making of unrealistic, over-optimistic pledges has remained a tradition. According to polls, around half the population of Britain and America now make resolutions – but, with less fear of divine retribution to motivate them, fewer than 10% keep them.
