The Weird CEO


by Charles Towers-Clark


  However, Ford grasped more than the power of electricity; he understood the power of marketing and, specifically, the importance of creating a product to signify status. He also recognised the importance of creating a consumer (or middle) class; and to help generate this class, he deliberately paid his workers sufficiently such that, in time, they could own their own Ford motor car[xxix].

  As depicted in the 2016 film Hidden Figures, the mathematicians who spent their working hours calculating complicated equations within NASA in the 1960s were known as ‘computers’. The end of the film marks the start of the third industrial revolution with the installation of an IBM computer at NASA to replace the human ‘computers’.

  Until the financial crisis of 2008, the automation of many tasks by computers was mostly about making incremental productivity improvements. Some jobs were lost but, with employment at a reasonably high level in Western countries, those displaced were able to find alternative employment and the productivity gains heralded more economic activity. This created the kind of virtuous circle that economists strive to achieve, and for which politicians take credit.

  We are still in the midst of those productivity improvements, but we are also beginning to witness the fourth industrial revolution, which will fundamentally change the way we interact with computers.[xxx]

  Until recently, actions undertaken by a computer had to be instigated by a human being. Now, however, computers are undertaking tasks autonomously. For many tasks, computers act as a dumb (albeit very quick) data processing aid, but they are starting to be used such that the physical, digital and biological worlds can interact in ways that we are only beginning to conceive. As Klaus Schwab mentions in the description of his 2016 book The Fourth Industrial Revolution, “we are even challenging ideas about what it means to be human”.[xxxi] Prosthetic arms controlled by thoughts alone; nano-technology used to replace worn human parts – technology is affecting the way we work and live in ways that I couldn’t have imagined when I started my company 20 years ago.

  B)

  DISRUPTIVE TECHNOLOGIES

  “Without question, intelligent technologies will continue to disrupt the world as we know it. There will be profound implications, both positive and negative.”

  Pierre Nanterme – CEO Accenture

  Since I have been running Pod Group, I have seen how technologies can disrupt whole industries and how people’s lives are re-shaped as a result. Pod was originally a provider in the mobile data communications industry (although we have now extended beyond this), an industry which provides the backbone for many of the disruptive technologies that have appeared over the last few years. Our customers build solutions for the Internet of Things (IoT). However, before IoT, there was Machine-to-Machine (M2M) and prior to that, Telematics (as most applications were related to vehicles). Whilst each of these terms refers to different parts of remote monitoring, they have mostly superseded each other. By taking the example of monitoring vehicles, it is possible to gain not only a good picture of how technology has evolved, but also a glimpse of how industries (and workers) that depend upon vehicles will fare.

  GPS devices have been available for many years, but it was during the late 1990s and in the new millennium that the technology became cheap enough and sensitive enough to be used widely in businesses. GPS trackers were added to high value vehicles and other machinery (such as diggers) in case of theft. However, an intelligent thief found it reasonably easy to follow the wiring and cut power to the device.

  Since those early days, three different technological shifts have taken place that have changed the way electronics – including GPS devices – are used. Firstly, GPS chips (like other electronics) have become cheaper and more powerful, allowing a device to detect its location indoors and regardless of weather conditions. Secondly, mobile networks, over which the location of the device is sent back to base, have become more widespread. Thirdly, battery technology has improved hugely,[xxxii] allowing manufacturers to add batteries that replace or complement fixed wiring. The progress of battery technology has allowed the creation of devices that can sit in the field for years without charging or human interaction and communicate occasionally using low-power radio networks.

  Due to better and cheaper technology, human imagination and initiative have been set free to create new markets by pulling together different technologies and methodologies.

  Let’s go back to the example of the GPS tracker. By combining the GPS with scheduling software (which has also improved hugely over the last twenty years due to greater computer processing power), Amazon is able to offer one-hour delivery slots in London compared to ‘sometime between nine and five’ in the past.
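  To make that combination concrete, here is a minimal sketch – with invented data, distances and function names, not Amazon’s actual system – of how a driver’s live GPS position and a simple scheduler might be combined to quote a one-hour delivery slot:

```python
from datetime import datetime, timedelta

# Hypothetical sketch: pair a driver's live GPS position with a simple
# scheduler to quote a one-hour delivery slot. Real routing engines use
# road networks and traffic data; straight-line distance is a stand-in.

AVERAGE_SPEED_KMH = 25  # assumed average urban driving speed

def distance_km(a, b):
    # Crude flat-earth approximation - good enough over a few kilometres.
    lat_km = (a[0] - b[0]) * 111
    lon_km = (a[1] - b[1]) * 85
    return (lat_km ** 2 + lon_km ** 2) ** 0.5

def quote_slot(driver_position, customer_position, now=None):
    """Return the next whole one-hour slot the driver can realistically make."""
    now = now or datetime.now()
    travel_hours = distance_km(driver_position, customer_position) / AVERAGE_SPEED_KMH
    eta = now + timedelta(hours=travel_hours)
    # Round the ETA up to the next full hour to get the slot start.
    slot_start = eta.replace(minute=0, second=0, microsecond=0) + timedelta(hours=1)
    return slot_start, slot_start + timedelta(hours=1)

# Example: driver in central London, customer a few kilometres away.
start, end = quote_slot((51.5074, -0.1278), (51.5310, -0.1230))
print(f"Delivery slot: {start:%H:%M}-{end:%H:%M}")
```

  A real scheduler would also consider road networks, traffic and the rest of the delivery queue, but the principle is the same: cheap, accurate positioning turns ‘sometime between nine and five’ into a narrow window.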

  However, disruptive business models don’t come about just as a result of combining technologies – they normally require a change in the environment. One reason that Amazon can employ so many delivery people, and that Uber has been so successful, is the improvement in scheduling mentioned above, combined with something out of their control – namely the fall in the manufacturing cost of GPS units. This has enabled any smartphone – including that of each Amazon delivery person or Uber driver – to provide an accurate location.

  During a recent visit to San Francisco, our Uber driver was a student who did not own a car. By using a car-sharing app, she rented a car by the hour and signed on to Uber and Lyft to offer rides. Working about five hours a day, she earned $100–$200 per shift. Having created employment for herself (albeit with help from Uber), with no capital investment and limited operational investment ($30 in advance to rent the car), she had an income of $2,000–$4,000 per month, working when she wanted to.

  In the United States, there are approximately 3.5 million truck drivers (but 8.5 million employed in the industry),[xxxiii] 1.3 million delivery drivers, 250,000 taxi drivers and 715,000 Uber and Lyft drivers.[xxxiv] I haven’t included hospital drivers, private taxis etc. Therefore, a conservative estimate of the number of professional drivers in the US would be 6 million. This equates to approximately 5% of the 126 million working population[xxxv] and doesn’t include indirect jobs related to the transportation industry. The UK has a similar percentage of drivers.
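  The arithmetic behind that estimate is easy to check; a quick sketch using the figures quoted above:

```python
# Quick check of the driver figures quoted above (all in millions).
truck_drivers = 3.5
delivery_drivers = 1.3
taxi_drivers = 0.25
uber_lyft_drivers = 0.715

professional_drivers = truck_drivers + delivery_drivers + taxi_drivers + uber_lyft_drivers
working_population = 126.0

print(f"Professional drivers: {professional_drivers:.2f} million")  # just under 6 million
print(f"Share of workforce: {professional_drivers / working_population:.1%}")  # roughly 5%
```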

  These jobs will be lost to self-driving vehicles – I will explain why.[xxxvi]

  First, the technology. Self-driving cars use hundreds of sensors to record everything going on around the car. These are combined with radar and cameras which transmit all this information to computers within the car. Then the clever stuff starts – the information is used both as a learning tool and a driving tool. Google has a fleet of vehicles that have collectively driven millions of miles gathering information about all the different scenarios that could happen on the road. For example, who could have predicted a scene such as a person in a wheelchair chasing a duck across the road… and then chasing it back again… and then chasing it in circles? If you are wondering what the car did – well, it did nothing until the duck had been chased off the road.[xxxvii]

  Tesla has taken a different route. On its newer vehicles, Tesla runs its autopilot software in the background, which simulates the car driving autonomously in real time and compares what could have happened to what did happen. This information is sent to Tesla for each car manufactured after October 2016 (some earlier Tesla cars also send information). I would estimate that Tesla has over 2 billion miles (billion not million) of shadow data. After receiving the data, Tesla uses Machine Learning to improve its autonomous driving software; and then updates cars already in the field to test and further improve the new code.
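  To illustrate the ‘shadow mode’ idea – a simplified sketch, not Tesla’s actual pipeline – the software can compare what the autopilot would have done with what the human driver actually did, and keep the disagreements as training material:

```python
from dataclasses import dataclass

@dataclass
class Frame:
    """One moment of driving: what the human did versus what the software would have done."""
    human_steering: float   # degrees, as actually driven
    shadow_steering: float  # degrees, as the background autopilot would have steered
    human_braking: bool
    shadow_braking: bool

def find_disagreements(frames, steering_tolerance=5.0):
    """Collect frames where the shadow autopilot diverged from the human driver.

    These divergences are the interesting training examples: either the
    software or the human got it wrong, and both cases are worth learning from.
    """
    divergent = []
    for frame in frames:
        steering_gap = abs(frame.human_steering - frame.shadow_steering)
        if steering_gap > steering_tolerance or frame.human_braking != frame.shadow_braking:
            divergent.append(frame)
    return divergent

# Example: three moments from a drive, one of which the shadow autopilot
# would have handled differently - that one gets flagged for training.
log = [
    Frame(0.0, 1.2, False, False),
    Frame(-12.0, 2.0, True, False),
    Frame(3.0, 2.5, False, False),
]
print(f"{len(find_disagreements(log))} divergent frame(s) flagged for training")
```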

  It would take the average driver about 166 million years to gain the same driving experience as that of a Tesla car.

  The number of accidents caused by Google Cars (now called Waymo) is practically zero. Almost all the accidents in which Waymo cars have been involved were caused by other drivers; likewise for Tesla.[xxxviii] Sadly, in March 2018, the first pedestrian fatality occurred when an Uber self-driving vehicle in autonomous mode hit a pedestrian in Arizona. Unfortunately, the accompanying human driver also failed to react. Accusations have arisen that autonomous cars are unsafe – and they do need further improvement. However, the low number of accidents caused by autonomous vehicles compares favourably with an average, over the last couple of years, of one accident per 150,000 miles driven by people – resulting in 2.35 million crash-related injuries per year in the United States and 200,000 in the United Kingdom.

  The law is finally catching up. By the time this book is published, self-driving cars will be permitted on roads within the United States, Estonia and other countries, subject to certain conditions. So as Elon Musk, the CEO of Tesla, has questioned, “When will human driven cars be banned from the road as unsafe?”[xxxix] In time, driving will become a hobby in the same way as horse riding changed from a means of transport to a sport. I am not sure how motor racing enthusiasts will feel about a human racing driver being beaten by a self-driving car, but in the same way that IBM’s chess-playing Deep Blue beat Garry Kasparov, the fastest driver in the world will soon be a computer.

  People at IBM have continued to try to disrupt the technology of the present. One tool that they are using to achieve this is the IBM Watson Artificial Intelligence platform[xl]. I always assumed that this was named after the Watson of Sherlock Holmes fame, but apparently it was named after IBM’s first CEO – Thomas J. Watson. IBM Watson is trying to replicate the human brain and combine it with huge processing power to allow developers to create applications that currently exist only in futuristic films.

  As human beings we can either accept that technology is going to disrupt our lives, or we can bury our heads in the sand. However, it is not a computer that decides to be the fastest driver – but rather a person (or a team of people) that has the vision, embraces the future and will persist until they have achieved their goal. Generation Z and younger millennials are natively comfortable with technology, embrace disruptive technologies and will ensure that change happens. The opportunity for employers lies in understanding this, giving employees the freedom to make decisions and recognising them for the changes they make.

  It can be difficult for those from Generation X to appreciate that the pace of technological change is increasing exponentially and that company strategies need to change ever more rapidly. The last strategy I set out in our annual meeting lasted four months before we realised it didn’t fit the market any more. Part of the reason for this speed of change lies in the rate at which processing power is growing and opening up new opportunities. This is covered in the next chapter.

  C)

  COMPUTERS, BIG DATA & IOT

  “Computers are like Old Testament gods: lots of rules and no mercy.”

  Joseph Campbell – Author

  Computers cannot surpass the human brain in all aspects – which is the reason for this book. Regardless, many computer scientists are trying to simulate human thought processes in computer models using Cognitive Computing.[xli] I will return to the related practicalities and limitations later but, for now, it is worth looking at the developments and possibilities around cognitive computing, Artificial Intelligence and raw data processing power.

  In 1965, Gordon Moore, the co-founder of Intel, stated that the number of transistors on an integrated circuit would double approximately every two years (this became known as Moore’s law). He was wrong in only one aspect – it was closer to eighteen months. To paraphrase this – every 18 months computers have double the processing power.

  It is this huge increase in processing power that allows us to run Artificial Intelligence software as well as ever more bloated software programs. Since 1985, when Windows 1.0 appeared, processing power has increased by between 10,000 and 100,000 times (depending on the chip).[xlii] Unfortunately, the latest version of Windows requires 8,000 times more power than Windows 1.0 did – which is why our computers don’t feel exponentially quicker than they were in 1985.[xliii]
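  As a back-of-the-envelope illustration of what an eighteen-month doubling implies (a rough calculation, not a benchmark of real chips):

```python
# Back-of-the-envelope: how many doublings fit between 1985 and 2018 if
# processing power doubles every eighteen months?
years = 2018 - 1985
doublings = years / 1.5
growth = 2 ** doublings

print(f"{doublings:.0f} doublings -> roughly {growth:,.0f}x in theory")
# In practice real chips grew by a smaller factor - hence the
# 10,000-100,000x range quoted above - but the compounding is the point.
```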

  A couple of years ago, I listened to a talk about an experiment aimed at creating a robot which could build a daughter robot to move 30 centimetres in the shortest time. The mother robot had various sizes of blocks, hinges and batteries available to stick together in order to find the fastest robot offspring. On each attempt, the mother would time the daughter’s progress before creating another daughter.

  As the experiment unfolded, it became clear that, rather than analysing the performance and making well thought out improvements, this was more of an exercise in trial and error. As the creation of each daughter was automated, the mother was able to create and test hundreds of different versions which had been stuck together in slightly different ways. It wasn’t so much an example of Artificial Intelligence as a demonstration of utilising raw processing power.
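  In spirit, the experiment is a random search: generate a candidate, measure it, keep the best. A toy sketch of that loop, with an invented scoring function standing in for the physical trial:

```python
import random

# Toy version of the mother-robot experiment: generate random 'daughter'
# designs, measure each one, keep the best. The measure() function here is
# invented - in the real experiment it was a physical trial of how quickly
# the assembled robot covered 30 centimetres.

def random_design():
    return {
        "blocks": random.randint(1, 5),
        "hinges": random.randint(0, 3),
        "battery": random.choice(["small", "large"]),
    }

def measure(design):
    # Stand-in for physically building and timing the daughter robot.
    score = design["blocks"] * 0.4 + design["hinges"] * 0.7
    score += 1.0 if design["battery"] == "large" else 0.3
    return score + random.uniform(-0.5, 0.5)  # noise, as in a real trial

best_design, best_score = None, float("-inf")
for trial in range(200):  # hundreds of automated attempts, no analysis in between
    candidate = random_design()
    score = measure(candidate)
    if score > best_score:
        best_design, best_score = candidate, score

print(f"Best of 200 trials: {best_design} (score {best_score:.2f})")
```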

  The most efficient processing machine available today (when you take into account the energy required) is, and will be for some time, the human brain. So how can the efficiency of the human brain be combined with computer processing power so that the brain can be utilised for higher value tasks? Computers can organise, compute and run complicated software – but they are still following instructions rather than thinking. Elon Musk is trying to level the playing field against what he sees as the danger of computers (and specifically Artificial Intelligence) by starting a company called Neuralink[xliv], which is researching brain-machine interfaces (think controlling prosthetic limbs to start with – and controlling everything by thoughts alone after that) with the aim of adding Artificial Intelligence to human brains.

  However, Neuralink’s promise of an integrated Human–AI future is some way off, so it is worth looking at how the brain and the computer complement each other today. There are a number of differences as explained in an article at ScienceBlogs:[xlv]

  Brains have an inbuilt retrieval system that can retrieve a full memory with a few cues. Computers by comparison need masses of storage and retrieval capability.

  Brains are not restricted to a limited short-term memory (or RAM).

  Neurons in brains are electrochemical – the chemical part of the signal adds a level of power that computers do not have with electrical signals alone.

  Processing and memory are performed together in the brain but separated in a computer.

  The brain is a self-organising system.

  Brains have bodies which provide senses and other advantages.

  Brains are bigger than any current standard computer.

  This simplified list of differences indicates why human brains are considerably more advanced than computers. Although computers will become more powerful, human brains will continue to have the ability to retrieve, sense and organise in a manner that, in computers, will be restricted by the amount of data and processing power available. It is these human capabilities that need to be encouraged in the workplace, while maximising the use of computers to undertake routine tasks that don’t require human intervention. However, the use of these capabilities relies on people being given the freedom to make their own decisions – and to retrieve, sense and organise their thoughts in the way they see fit for the benefit of each individual task or project. Using these capabilities will allow us to make best use of what computers can do.

  So, what can a computer do?

  First and foremost, computers are very good at following instructions. So long as the data are inserted into the computer in a format it can understand, it will be able to process them as fast as its processor will allow. Sooner or later it will regurgitate the information you require. However, if the information that the computer is trying to process isn’t exactly as it expects, then you can experience the frustration that leads to wanting to throw the damn machine out of the window (remembering to open the window first!).

  Currently, when an application doesn’t perform as expected, a person needs to look at the code and make a fix. In the future, however, this role will be replaced by a program designed for that purpose.

  Companies providing Cloud computing servers (with huge warehouses of computers) would until recently have had, for example, 200 engineers maintaining these servers. Now that is likely to be nearer to five engineers. Computer engineers have written programs that constantly monitor all servers, identify broken bits of code, decide on the best fix and automatically apply the appropriate patch.
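  A stripped-down sketch of that kind of monitoring loop – with hypothetical health checks and fixes, not any particular cloud provider’s tooling – looks something like this:

```python
# Hypothetical monitoring loop: probe each server, match known failure
# signatures to known fixes, and escalate anything unrecognised to the
# remaining human engineers.

KNOWN_FIXES = {
    "disk_full": "rotate_logs",
    "service_down": "restart_service",
    "stale_config": "reapply_config",
}

def check_health(server):
    # Stand-in for real health probes (disk space, processes, checksums...).
    return server.get("fault")  # None means healthy

def monitor(servers):
    for server in servers:
        fault = check_health(server)
        if fault is None:
            continue
        fix = KNOWN_FIXES.get(fault)
        if fix:
            print(f"{server['name']}: {fault} -> applying {fix}")
        else:
            print(f"{server['name']}: {fault} -> paging an engineer")

fleet = [
    {"name": "srv-01"},
    {"name": "srv-02", "fault": "disk_full"},
    {"name": "srv-03", "fault": "melted_psu"},  # hardware failure: still needs a human
]
monitor(fleet)
```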

  Although this monitoring application has replaced around 195 engineers, it is not making any decisions on its own. The actions that it can take are dependent upon the decisions of the person who wrote the application. The reason for the continued employment of the remaining five people is to replace physical parts and to add or fix code if an unexpected bug arises. As more and more fixes are coded, the team may reduce to, say, three people. This raises an interesting motivational quandary: how do you stay motivated when you know you are coding yourself out of a job?

  So, what else can a computer do? Actually, not a lot. A computer will only do what it has been programmed to do.

  However, adding Artificial Intelligence and more specifically Machine Learning changes a computer from being dumb to smart(er) – this is covered in the next chapter. A standard application’s purpose is to provide pre-set responses to pre-set commands. A Machine Learning application will allow for questions (commands) that may not fit the pre-set responses and try to calculate the best answer. Although it may be very frustrating to go through twelve suggestions (none of which work) before you can ask a person why your computer won’t play Netflix on your TV, you have provided valuable information to a Machine Learning application – namely that none of the suggestions worked. If you had solved your problem on the third suggestion and many other people experienced the same issue at the same time, then the application will have learnt that a significant number of people are suffering a known issue and an update needs to be sent to all computers running that particular version of the software.
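  A simplified sketch of that feedback loop – with invented thresholds and data, purely illustrative – might look like this:

```python
from collections import Counter

# Illustrative feedback loop: record which suggestion (if any) resolved each
# problem. A cluster of 'nothing worked' reports against the same software
# version is the signal that an update needs to be pushed out.

reports = [
    {"version": "4.2.1", "resolved_by": 3},     # third suggestion worked
    {"version": "4.2.1", "resolved_by": None},  # none of the suggestions worked
    {"version": "4.2.1", "resolved_by": None},
    {"version": "4.1.0", "resolved_by": 1},
]

unresolved = Counter(r["version"] for r in reports if r["resolved_by"] is None)

for version, count in unresolved.items():
    if count >= 2:  # invented threshold for 'a significant number of people'
        print(f"Version {version}: {count} unresolved reports - schedule an update")
```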

 
