The Weird CEO


by Charles Towers-Clark


  When questioned about job satisfaction or career goals, older Millennials scored considerably higher than other generations on the need for freedom to dress as they want, work in an open environment, have their own office, have the option to work remotely, be reimbursed for tuition costs and have bonuses adapted to their performance. In terms of salaries and compensation, younger Millennials and Generation Z have similar expectations to those of Generation X. However, older Millennials expect salaries approximately 20% higher.

  An obvious improvement in our working lives is our understanding of the need for equality for women in the workplace. Strangely, in the millennial generation, the gap between women and men was highest for the desire to achieve a high salary, position or leadership (with men wanting more) compared to every other generation. Why this is, I am not sure, but Generation Z view gender equality as a non-negotiable right, so I would hope that this difference is due to a positive choice by women rather than a glass ceiling.

  Over the last 70 years, soft benefits have entered the working person’s consciousness, such that these are now key to the choice of employment. The EY study shows this for the three youngest generations.

  Top three benefits wanted by employees:

  Generation Z: Health Insurance coverage; Feeling my ideas are valued; Recognition for my contribution

  Younger Millennials: Feeling my ideas are valued; Health Insurance coverage; Work-life balance

  Older Millennials: Health Insurance coverage; Work-life balance; Vacation/paid time off

  This study shows that, as the new generations enter the workforce, they expect to be able to add, and be recognised for, their value. To achieve this, it is necessary to push ownership and responsibility down the ladder by the implementation of WEIRD practices (Wisdom, Emotional intelligence, Initiative, Responsibility, Development [self]).

  This will help us to overcome the technological changes of the next fifteen years and beyond. It is ironic that, having finally reached a point where people have choices around how and on what they want to work, that choice could be taken away by the technology that made it possible. Technology has already changed our way of working, but the introduction of Artificial Intelligence is the elephant in the room that cannot be ignored.

  Its future impact is, of course, unknown, but Artificial Intelligence requires data. However, from a social standpoint it is not clear who will control ownership of the data and therefore own the output of any Artificial Intelligence programs.

  B)

  WHO OWNS THE DATA IN A DATA DRIVEN WORLD?

  “Those who rule data will rule the entire world.”

  Masayoshi Son – CEO of SoftBank

  The value of computers is not in their processors, but rather in the data that they collect. Likewise, the value of many companies is now based on their ability to use data that they collect. The appetite for data appears to be insatiable – 90% of information ever produced has been generated in the last two years.[lxxxiv] This trend is not going to slow down as more data are collected to provide personalised advertising, social media, medical treatments – in fact all aspects of our life. It is not the computers demanding this information, but the designers of the software that encourage us to provide data (willingly or unknowingly). This can then be processed as part of a bigger data set in order to draw conclusions that may not be true – or may be more perceptive than we would care to admit.

  Yuval Noah Harari in his excellent 2015 book Homo Deus: A Brief History of Tomorrow talks about the Internet of All Things, focused around the belief of the truth of data.[lxxxv] He gives an example of how Facebook may know you better than anybody knows you – including your spouse. Facebook asked 86,220 volunteers one hundred questions about themselves. Based on the analysis of only 10 ‘likes’, Facebook could always predict each recipient’s answers better than a work colleague. After 70 ‘likes’, Facebook produced better predictions of the recipient’s answers than friends; 150 ‘likes’ beat family members and 300 ‘likes’ beat spouses. So, if we have more than 300 ‘likes’ on our Facebook account – then advice from Facebook will be more reflective of our personality than any human advice, including that from our partner. This brings a new angle to an adolescent blaming a friend for getting in trouble – “but Facebook said I should do it”.
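  The thresholds reported above can be expressed as a simple lookup. This is purely an illustrative sketch of the study's headline numbers; the function name and structure are my own, not taken from the study or the book:

```python
# Thresholds from Harari's summary of the Facebook study:
# 10 'likes' beat a work colleague, 70 beat friends,
# 150 beat family members, 300 beat spouses.
THRESHOLDS = [
    (300, "spouse"),
    (150, "family member"),
    (70, "friend"),
    (10, "work colleague"),
]

def model_outperforms(num_likes):
    """Return the closest human judge the model reportedly beats,
    or None if there are too few 'likes' to beat anyone."""
    for threshold, judge in THRESHOLDS:
        if num_likes >= threshold:
            return judge
    return None

print(model_outperforms(320))  # spouse
```

  The point the example makes concrete is that the model's advantage is cumulative: each additional slice of data moves its predictions past a progressively closer human judge.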

  My children are doing most of their learning at school on computers. As far as I know, each piece of work is not being recorded in an ever-growing database about their life. But it wouldn’t be difficult to do. If it were recorded throughout their time at school, there would be a huge database that not only provides a progression of their education but could also give a very accurate assessment of how their character and thoughts have matured over the years. Add that to the children’s genome sequence, Facebook personality assessment and their web searches and the result is a mountain of data.

  If all the data were fed into some larger databases of other children, and continued into adulthood, then the addition of some Machine Learning could produce a very accurate prediction of the type of job they would excel at and an excellent profile to provide to a dating app to find their perfect mate. But would a dating app with all this information be better than relying on romanticism?

  The philosopher Alain de Botton in a talk on Romanticism[lxxxvi] pointed out the danger of the modern-day concept of a marriage based on romanticism, which replaced the marriage of reason around 1750. Romanticism puts much emphasis on choosing a partner because instinctively it feels right. This compares to a relationship of reason based on, for example, neighbouring land. Many of these historic marriages of reason were a disaster (normally for the women), but could a matching app (which is the modern-day equivalent of overly zealous parents) be more successful than relying on the instinctive choice? As a product of the romantic marriage era, I have a belief in human irrational behaviour. However, my scientific side says exactly that – it is irrational, inefficient and a poor way to make a decision. Why, with all the data available, would the next generation choose to make instinctive decisions?

  Yuval Harari explores the viewpoint of ‘dataism’ believers – those who worship data over any other God, envisaging an all-encompassing cosmic data processing system that will cause Homo sapiens not to exist in its current form but as a super-human race. This would seem rather futuristic, but if we take the processed data mentioned above and find a way to incorporate it into our brain to use in daily life – we would be closer to self-awareness than anybody in the history of man. But who would own the data?

  As we move towards a scenario where almost everything will provide data, the opportunity for creating efficiencies is huge. As an example, aggregated health data will result in personalised medicine, which will create incentives for the medical profession to focus on prevention rather than cure – thus saving huge amounts of money as well as lives.

  An individual’s health data alone is of limited value to others. But once it is joined with data from thousands of others, valuable trends begin to emerge.

  23andMe provides DNA analysis for anybody who wants to research their lineage. For $99 and a saliva sample, the company can provide the details of your forefathers. It is now one of a handful of health companies with a valuation of over $1 billion but this value is not in the ‘know your ancestor service’ – but in the fact that people are voluntarily sending their saliva to be tested for DNA. To be fair to the company, they appear to have a very strict opt-in policy regarding the use of the DNA for other research purposes – but apparently 80% of people do opt in. This allows the company to sell the data to pharmaceutical companies. If this is then used to aid the development of better drugs, it could be argued that it’s a good thing. However, it is clear that the data do not remain personal.

  So when does a good thing veer on the side of evil? As has already happened with the collection of consumer data, there is a tipping point after which it is difficult to compete with the existing holders of Big Data.

  I was asked by a Silicon Valley lawyer whether our company was pre- or post-revenue. I understood the words but couldn’t understand why a company would want to be pre-revenue – isn’t the point of companies to sell things? Not necessarily! Well-funded start-ups are grabbing customers, regardless of the cost, in the hope that the data gained will more than pay back the cost of acquiring these customers. The dream of many new start-ups is to find a niche outside the interest of Google, Apple, Facebook or Amazon and to grow a sufficient customer base of data to then be bought by one of these companies. It is this approach to building companies that makes it extremely difficult for revenue-creating, organically grown companies to survive and why the lure of investors with deep pockets allowing companies to buy customers (or data) is so hard to resist.

  But how can these companies say they own the data? Similar to 23andMe, in the use of various services we agree to allow our data to be used for other purposes. In theory, it may be possible to opt out, but in doing so we limit what we can do online. When pushed, the likes of Facebook will resort to the argument that if people don’t want their data to be used, they can stop using Facebook. However, for many, this would restrict their social life severely. For some teenagers, stopping Instagram (owned by Facebook) could result in exclusion or bullying at school. Thus, these four companies have reached a tipping point where it is almost impossible for others to compete.

  They have huge amounts of data on millions of individuals that they use for their own profit.[lxxxvii] Further, they have created an exclusive ownership class from which they can create an array of more and more efficient services by using Artificial Intelligence to mine the data that they hold.

  Considering that these services will make our lives more efficient, and thus we will want to buy them, how do we stop this being used to the detriment of the consumer?

  C)

  THE LITIGATION & REGULATION PARADOX

  “Success in creating AI would be the biggest event in human history. Unfortunately, it might also be the last, unless we learn how to avoid the risks.”

  Stephen Hawking – Theoretical Physicist

  As mentioned above, we are moving into an era where those who hold the data could potentially control, or at least influence, much of our lives with the use of Artificial Intelligence programs. To date, computer programs have been passive; the worry with Artificial Intelligence is that it can make them active.

  In July 2017, Elon Musk said, “Artificial Intelligence is a fundamental risk to the existence of human civilisation.” He added that Artificial Intelligence is one of the most pressing threats to the survival of the human race.[lxxxviii] So, should it be regulated?

  In January 2017 in Asilomar, California, a set of 23 principles was formulated in the hope of ensuring Artificial Intelligence is used beneficially for humankind. More than 1,200 Artificial Intelligence researchers signed up to adhere to the Asilomar principles.[lxxxix] Despite his alarmist statements, Musk’s main point is that Artificial Intelligence is advancing so fast that governments should get involved now to avoid scary scenarios wittingly or unwittingly created by programmers, rather than act with knee-jerk legislation later.

  Some of the Asilomar principles include:

  Science–Policy Link: There should be constructive and healthy exchange between Artificial Intelligence researchers and policy makers.

  Research Culture: A culture of cooperation, trust and transparency should be fostered among researchers and developers of Artificial Intelligence.

  Race Avoidance: Teams developing Artificial Intelligence systems should actively cooperate to avoid corner cutting on safety standards.

  Importance: Advanced Artificial Intelligence could represent a profound change in the history of life on earth and should be planned for and managed with commensurate care and resources.

  Risks: Risks posed by Artificial Intelligence systems, especially catastrophic or existential risks, must be subject to planning and mitigation efforts commensurate with their expected impact.

  So far, so nice. Within the Asilomar principles, there is a sub-section regarding ethics and values.

  Human Values: Artificial Intelligence systems should be designed and operated so as to be compatible with ideals of human dignity, rights, freedoms and cultural diversity.

  Human Control: Humans should choose how and whether to delegate decisions to Artificial Intelligence systems, to accomplish human-chosen objectives.

  The full Asilomar Principles are given overleaf.

  Table 4.1: The full Asilomar Principles

  GOAL

  DESCRIPTION

  Research

  Research Goal

  The goal of AI research should be to create not undirected intelligence, but beneficial intelligence.

  Research Funding

  Investments in AI should be accompanied by funding for research on ensuring its beneficial use, including thorny questions in computer science, economics, law, ethics and social studies; such as:

  How can we make future AI systems highly robust, so that they do what we want without malfunctioning or getting hacked?

  How can we grow our prosperity through automation while maintaining people’s resources and purpose?

  How can we update our legal systems to be more fair and efficient, to keep pace with AI, and to manage the risks associated with AI?

  What set of values should AI be aligned with, and what legal and ethical status should it have?

  Science–Policy Link

  There should be constructive and healthy exchange between AI researchers and policy-makers.

  Research Culture

  A culture of cooperation, trust, and transparency should be fostered among researchers and developers of AI.

  Race Avoidance

  Teams developing AI systems should actively cooperate to avoid corner-cutting on safety standards.

  Ethics and Values

  Safety

  AI systems should be safe and secure throughout their operational lifetime, and verifiably so where applicable and feasible.

  Failure Transparency

  If an AI system causes harm, it should be possible to ascertain why.

  Judicial Transparency

  Any involvement by an autonomous system in judicial decision-making should provide a satisfactory explanation auditable by a competent human authority.

  Responsibility

  Designers and builders of advanced AI systems are stakeholders in the moral implications of their use, misuse and actions, with a responsibility and opportunity to shape those implications.

  Value Alignment

  Highly autonomous AI systems should be designed so that their goals and behaviours can be assured to align with human values throughout their operation.

  Human Values

  AI systems should be designed and operated so as to be compatible with ideals of human dignity, rights, freedoms and cultural diversity.

  Personal Privacy

  People should have the right to access, manage and control the data they generate, given AI systems’ power to analyse and utilize that data.

  Liberty and Privacy

  The application of AI to personal data must not unreasonably curtail people’s real or perceived liberty.

  Shared Benefit

  AI technologies should benefit and empower as many people as possible.

  Shared Prosperity

  The economic prosperity created by AI should be shared broadly, to benefit all of humanity.

  Human Control

  Humans should choose how and whether to delegate decisions to AI systems, to accomplish human-chosen objectives.

  Non-subversion

  The power conferred by control of highly advanced AI systems should respect and improve, rather than subvert, the social and civic processes on which the health of society depends.

  AI Arms Race

  An arms race in lethal autonomous weapons should be avoided.

  Longer-term Issues

  Capability Caution

  There being no consensus, we should avoid strong assumptions regarding upper limits on future AI capabilities.

  Importance

  Advanced AI could represent a profound change in the history of life on Earth, and should be planned for and managed with commensurate care and resources.

  Risks

  Risks posed by AI systems, especially catastrophic or existential risks, must be subject to planning and mitigation efforts commensurate with their expected impact.

  Recursive Self-Improvement

  AI systems designed to recursively self-improve or self-replicate in a manner that could lead to rapidly increasing quality or quantity must be subject to strict safety and control measures.

  Common Good

  Superintelligence should only be developed in the service of widely shared ethical ideals, and for the benefit of all humanity rather than one state or organisation.

  I mentioned the modern-day philosopher Alain de Botton previously, and defining these human values will provide plenty of material for him and other philosophers over the next few years. These philosophers are better equipped to define these human values than I am, so this book focuses on one of the questions asked within the Asilomar principles: “How can we grow our prosperity through automation whilst maintaining people’s resources and purpose?” The question of human purpose is a huge sociological challenge that we face over the next fifteen years.
