
  Agile IT

  Agile IT is a flexible delivery organization intended to better respond to the needs of businesses that want to explore new, more disruptive ideas while moving fast to beat the competition. It focuses on digitization and flexibility to support Vattenfall in optimizing customer experiences, developing new business models and achieving operational excellence by providing mobile and self‐service solutions. Leveraging cloud and big data analytics, Agile IT supports the business in trying out new ideas and technologies that may become part of its strategy in the future.

  Core IT

  Core IT shall meet the high security and stability requirements of businesses such as nuclear and hydro generation as well as staff functions, implementing more incremental changes. Its focus is to deliver and operate stable, reliable and compliant application and infrastructure services in a cost‐efficient way.

  Besides the CIO Office, which is in charge of overall IT strategy, set‐up and communication, the new Vattenfall IT organization comprises two additional key elements:

  Transformation Office

  A Transformation Office manages technology strategies, architecture, and security and ensures the successful execution of divestments, acquisitions, and group‐wide transformations (e. g. consolidation of the various systems in use for Enterprise Resource Planning (ERP)).

  Service Integration

  Service Integration manages a portfolio of delivery models and is in charge of performance management and metrics, sourcing, vendor management and governance, service management and project portfolio management.

  54.4 Monitoring Success

  For the implementation of “two‐speed IT” at Vattenfall, stability is a priority. The business will be in the lead of the IT transformation, and changes are driven by business demand. The IT department aligns its plans and approach with all current major business change projects to avoid or minimize disruptions. The organizational adjustments will be gradual – no big bang! Smaller changes are being executed during 2016, aligned to the target picture. The implementation is in tune with the overall pace of Vattenfall’s business transformation, and risks are actively managed.

  Vattenfall IT continues to survey users and business managers to measure their satisfaction and identify and address issues before they grow into problems.

  54.5 Conclusion

  A traditional corporation in an “old” industry, such as Vattenfall in the utilities sector, must balance innovation with the need for stability. While digitization can start with many small projects in different parts of the organization, there is a need to establish overarching governance and formulate a strategy.

  The IT organization must respond by adapting its approaches, organizational structure, employee skills, culture and sourcing strategy to remain relevant and transform from service provider to value creator. The implementation must be well aligned with the business to minimize disruption, make it possible to learn from pilot projects, and enable continuous improvement [8].

  Driving cultural change is a key factor for successful digitization of a traditional company. This includes engaging in proactive and transparent communication with external stakeholders via social media – instead of relying exclusively on classic public relations and advertising. Employees have been addressed with numerous roadshows and training offerings ranging from traditional courses to short video clips on the intranet. Executive sponsors have contributed to the credibility of these messages.

  Development and marketing of new products and services must be as customer‐centric as possible, realizing that today’s customers can choose and compare a wide array of options to meet their needs. They have enormous transparency, so instead of traditional marketing pushing offerings to consumers, companies must engage in conversations with their (potential) customers, learn about their needs and respond to their feedback to create “pull” for their offerings. The internet and social media provide many platforms to amplify “word of mouth”. In response to these developments, it is no longer sufficient to manage clients with the main goal of customer satisfaction – as measured by the Customer Satisfaction Index (CSI). Instead, companies should rather aim to maximize customer engagement – as measured by the Net Promoter Score (NPS) [9].
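
  For reference, the standard NPS calculation can be sketched in a few lines of Python: respondents rate their likelihood to recommend on a 0–10 scale, promoters (9–10) and detractors (0–6) are counted, and the score is their percentage difference. The survey ratings below are purely illustrative, not Vattenfall data:

```python
def net_promoter_score(ratings):
    """Compute NPS from 0-10 'likelihood to recommend' ratings.

    Promoters rate 9-10, detractors 0-6; NPS is the percentage of
    promoters minus the percentage of detractors (range -100..100).
    """
    if not ratings:
        raise ValueError("no ratings given")
    promoters = sum(1 for r in ratings if r >= 9)
    detractors = sum(1 for r in ratings if r <= 6)
    return 100.0 * (promoters - detractors) / len(ratings)

# Illustrative survey responses only:
print(net_promoter_score([10, 9, 8, 7, 6, 10, 3, 9]))  # 25.0
```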

  While promising quicker delivery of tangible results, an agile approach also demands a constantly high level of engagement from business and IT. In the traditional “waterfall” approach, project phases last for months and the activity level heats up towards the end of each phase, with the business primarily engaging in sign‐off, testing and training. The agile approach, by contrast, requires a high level of engagement throughout, to ensure delivery of the agreed product increment at the end of each two‐week sprint.

  Vattenfall’s journey towards more sophisticated use of business intelligence has exposed the need for solid master data management as the foundation for generating credible insights from combining data from various sources. IT needs to support this effort by developing and implementing an architecture that allows combining data from systems of record (e. g. SAP ERP), operational systems (e. g. SCADA), systems of engagement (e. g. Facebook and Twitter) and other external sources. A challenge is the shortage of data scientists who are able to make sense of these new opportunities.

  By sharing the above use cases and best practices, the authors wish to support other organizations in their effort to make best use of their IT services.

  References

  1.

  “www.voice-ev.org/sites/default/files/VOICE_Handout_2016.pdf,” 2016. [Online]. [Accessed 20 October 2016].

  2.

  “www.vattenfall.com,” [Online]. [Accessed 20 October 2016].

  3.

  M. Böhm, “IT-Compliance als Triebkraft von Leistungssteigerung und Wertbeitrag der IT,” HMD-Praxis der Wirtschaftsinformatik, Issue 263, pp. 15–29.

  4.

  S. Helmke and M. Uebel, Managementorientiertes IT-Controlling und IT-Governance, Wiesbaden: Springer Fachmedien, 2016.

  5.

  A. Wiedenhofer, “Flexibilitätspotentiale heben – IT-Wertbeitrag steigern,” HMD-Praxis der Wirtschaftsinformatik, Issue 289, 2013, pp. 107–116.

  6.

  M. Skilton, Building the Digital Enterprise, Palgrave Macmillan, 2015.

  7.

  W. Van Grembergen, From IT Governance to Enterprise Governance of IT: A Journey for Creating Business Value Out of IT, IFIP, 2010.

  8.

  M. Nilles and E. Senger, “Nachhaltiges IT-Management im Konzern – von den Unternehmenszielen zur Leistungserbringung in der IT,” HMD-Praxis der Wirtschaftsinformatik, Issue 284, pp. 86–96.

  9.

  P. Samulat, Methode für Messungen und Messgrößen zur Darstellung des “Ex-post” Wertbeitrages von IT-Projekten, Wiesbaden: Springer Fachmedien, 2015.

  Further Reading

  10.

  R. Kohli and V. Grover, “Business Value of IT: An Essay on Expanding Research Directions to Keep up with the Times,” Journal of the Association for Information Systems, Vol 9, Issue 2, pp. 23–39, January 2008.

  11.

  J. vom Brocke and T. Schmiedel, BPM – Driving Innovation in a Digital World, Springer, 2015.

  Footnotes

  1“e. V.”: a registered association under German law with full legal personality.

  © Springer-Verlag GmbH Germany 2018

  Claudia Linnhoff-Popien, Ralf Schneider and Michael Zaddach (eds.), Digital Marketplaces Unleashed, https://doi.org/10.1007/978-3-662-49275-8_55

  55. Don’t Lose Control, Stay up to Date: Automated Runtime Quality Engineering

  Thomas Gabor1, Marie Kiermeier1 and Lenz Belzner1

  (1) Ludwig-Maximilians-Universität München, Munich, Germany

  Thomas Gabor (Corresponding author)

  Email: thomas.gabor@ifi.lmu.de

  Marie Kiermeier

  Email: marie.kiermeier@ifi.lmu.de

  Lenz Belzner

  Email: belzner@ifi.lmu.de

  55.1 Introduction

  Modern business requirements are highly volatile. Smart factories are expected to realize production lines capable of lot size one, enabling highly flexible adjustment to customer requirements and needs. Financial markets react to available information within moments. Electricity demand changes constantly and with high variance, while production capabilities in the smart grid depend on weather conditions and are thus also constantly changing. Modern industrial systems have to cope with these settings, both effectively and safely.

  The rise of digitization in terms of sensory information and infrastructure for data aggregation and distribution has put massive amounts of valuable data at the fingertips of system designers. However, the complexity and amount of available information, in combination with ever‐changing situations and requirements, heavily impacts the effectiveness of classical approaches to designing and operating systems. Identifying currently relevant value in massive amounts of data is no longer feasible at design time.

  To this end, software has to analyze and transform runtime data into decisions about system configurations, reorganization and adaptation. This gives rise to a number of questions: How can data analysis be performed effectively, given the sheer amount of available data?

  How can analysis be performed accounting for current business and customer requirements?

  How to transform analysis into system decisions?

  It has been shown that autonomous systems endowed with the capability to learn and self‐organize can provide a promising approach to tackling the increasing complexity of software engineering [1]. However, the increasing flexibility of systems severely impacts classical mechanisms of quality assurance. This challenge yields additional questions to be answered: How to ensure performance if runtime conditions are not exactly known when designing the system?

  How to assure that a system meets its qualitative requirements, even though it is allowed to reconfigure itself according to situations that only arise at runtime?

  How to test a system reorganizing itself at runtime?

  In this chapter, we sketch potential approaches to answering these questions. We will discuss how systems can be made “smarter” by enabling them to know about their ultimate goals and to learn how to fulfill them best. This leads us to the important question of how self‐learning processes can always be kept in check and how we can engineer them in our best interest.

  55.2 Getting Value from Your Runtime Data

  In this section, we outline an approach that enables software‐driven systems to transform data becoming available at runtime into decisions about configuration and reorganization that increase or maintain their potential to satisfy requirements and remain concordant with a given specification.

  55.2.1 Shape the Future to Your Needs

  One of the keys to effectively transforming available runtime data into valuable decisions is to provide a system with means to evaluate future trends and developments. The central idea is to build a predictive model (i. e. a simulation) of application domain dynamics, both from expert knowledge and from data available at system design time. See [2] and [3], e. g., for recent research directions on the matter of transforming available data into applicable predictive models.

  Once an accurate model is available, it can be fed with data gathered at runtime in order to estimate future trends and consequences of system configuration and reconfiguration. Consider a smart factory that is able to reposition its current production machines. A model can be used to evaluate the consequences of different configuration and reorganization decisions, e. g., in terms of time to production, but also in terms of energy cost or any other metric of interest.
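
  As a minimal sketch of this idea in Python: a toy `simulate` function stands in for the predictive domain model (the configuration encoding and both metrics are invented for illustration), and a Monte Carlo average over several rollouts estimates the metrics of interest for each candidate configuration:

```python
import random

def simulate(config, runtime_data):
    """Hypothetical domain model: roll out one possible future for a
    candidate machine configuration and return the metrics of interest."""
    # A real model would encode machine positions, job routing, breakdowns, ...
    base = sum(config) / len(config)
    noise = abs(random.gauss(0, 0.1))
    return {"time_to_production": base + noise,
            "energy_cost": 0.5 * base + noise}

def evaluate(config, runtime_data, n_samples=50):
    """Monte Carlo estimate of the expected metrics for a configuration,
    obtained by averaging over several simulated futures."""
    runs = [simulate(config, runtime_data) for _ in range(n_samples)]
    return {k: sum(r[k] for r in runs) / n_samples for k in runs[0]}

# Pick the candidate reconfiguration with the best expected time to production:
candidates = [(0.2, 0.8), (0.5, 0.5), (0.9, 0.1)]
best = min(candidates,
           key=lambda c: evaluate(c, runtime_data={})["time_to_production"])
```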

  While the ability to drive decisions based on reflection about future consequences enables flexible system reaction to a variety of production requirements not known a priori, it poses a big challenge for quality assurance. We argue that many of the non‐functional requirement assessment activities can be pushed into system runtime, by also exploiting the model and evaluating consequences of system decisions. The difference is only in the metric of interest evaluated when simulating: For example, when a smart factory decides to reorganize its machines to meet current production requirements, one can at the same time assess not only time to production or energy cost, but also whether performance or safety requirements will be met.
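
  Continuing the hypothetical sketch above, the same `simulate` model can be queried again with a requirement predicate as the metric of interest; the safety criterion below is a placeholder, as is the required satisfaction threshold:

```python
def meets_requirements(config, runtime_data, n_samples=50, threshold=0.95):
    """Runtime quality check: estimate whether a configuration satisfies a
    non-functional requirement by measuring the fraction of simulated
    futures in which a requirement predicate holds."""
    def safe(metrics):
        return metrics["energy_cost"] < 1.0   # placeholder requirement
    ok = sum(safe(simulate(config, runtime_data)) for _ in range(n_samples))
    return ok / n_samples >= threshold
```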

  55.2.2 Today’s Decisions Drive Tomorrow’s Opportunities

  In many cases, the choices a system makes at a given moment have consequences for the choices it is able to make in the future. Imagine an autonomous car: Acceleration now yields higher speed in the future, and stopping or steering a fast car is different from doing the same for a slow one. A smart factory that reorganizes its production machines in order to produce items of type A as efficiently as possible may take a long time to reorganize for production of type B items. It is therefore crucial to use available models to evaluate sequences of system decisions. In the example, given a model of potential future requests, this would allow the factory system to identify valuable trade‐off configurations that enable good performance for type A items while maintaining the flexibility to change to production of type B items.

  Typically, as the model about requests is based on incomplete information (e. g., some recent patterns in orderings), it would be given in a probabilistic form to capture the designers’ uncertainty about the ordering dynamics. See [4] for a scientific discussion of incomplete information, uncertainty and probability. See [5] for a discussion of the relation of model‐free and model‐based decision making, and [6] for a recent survey of sequential decision making under multiple objectives.
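
  To illustrate, the following sketch evaluates decision sequences by Monte Carlo rollouts against a probabilistic request model; the request distribution, the plan encoding and the scoring are all invented for this toy example:

```python
import random

def sample_requests(horizon):
    """Hypothetical probabilistic order model: each step brings a request
    for an item of type A or B, with type A currently more likely."""
    return [random.choices(["A", "B"], weights=[0.7, 0.3])[0]
            for _ in range(horizon)]

def rollout(plan):
    """Score one decision sequence (a plan encoded as the item type the
    factory is tuned for at each step) against one sampled request stream."""
    requests = sample_requests(len(plan))
    return sum(1.0 if tuned == requested else -0.5
               for tuned, requested in zip(plan, requests))

def expected_value(plan, n_samples=1000):
    """Monte Carlo estimate of a plan's expected value under the
    probabilistic request model."""
    return sum(rollout(plan) for _ in range(n_samples)) / n_samples

# Compare a pure type-A plan against a trade-off plan:
plans = [["A"] * 10, ["A"] * 7 + ["B"] * 3]
print(max(plans, key=expected_value))
```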

  While sequential planning of system decisions is straightforward on a conceptual level, this approach yields exponential growth of the problem space with larger planning horizons (i. e., the number of sequential decisions considered). However, we require our systems to make their decisions in time: Decisions have to be made before their estimated evaluation becomes invalidated by environmental changes and events. For example, consider a smart factory system trying to find an optimal trade‐off configuration for type A/B items (see above), given a particular new ordering situation requiring preference of type A performance, and given some probability that type B items will be preferred again in the future. If the preference for type B occurs while the system is still optimizing for the previous (now outdated) requirement, all that effort was wasted.

  We therefore approach system reconfiguration in an online way: Decision making mechanisms are to be designed in order to constantly be able to produce (nearly) optimal (i. e., good enough) results, given all currently available information and resources (such as computation time until decision). If decision mechanisms are well designed, they are (a) able to exploit available information and resources effectively and (b) always able to return the decision currently being estimated to be optimal.
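
  A minimal sketch of such an anytime decision loop follows; the one-sample evaluator `evaluate_once` and the naive uniform allocation of samples are assumptions of this illustration, not a prescribed design:

```python
import random
import time

def anytime_decide(candidates, evaluate_once, budget_seconds=0.05):
    """Anytime decision loop: keep refining Monte Carlo estimates until the
    time budget expires; at every moment, the currently best-estimated
    decision is available. `evaluate_once` is a hypothetical one-sample
    evaluator for a candidate decision."""
    totals = {c: 0.0 for c in candidates}
    counts = {c: 0 for c in candidates}
    deadline = time.monotonic() + budget_seconds
    while time.monotonic() < deadline:
        c = random.choice(candidates)        # naive allocation of samples
        totals[c] += evaluate_once(c)
        counts[c] += 1
    # Return the decision with the best current estimate.
    return max(candidates,
               key=lambda c: totals[c] / counts[c] if counts[c] else float("-inf"))

# Toy usage with a noisy evaluator that favors candidate "B":
print(anytime_decide(["A", "B"],
                     lambda c: random.random() + (0.2 if c == "B" else 0.0)))
```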

  55.2.3 Better Good Now than Getting Better Forever

  In order to allow for such resource‐sensitive decision making, we resort to sampling approaches. These estimate decision quality by sampling potential consequences from the model instead of taking into account all potential ones. This ensures scalability of the approach, and often some quality of the estimate can be computed as well (i. e., a confidence in the decision mechanism’s current result).
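
  One simple way to quantify such confidence, sketched below, is an approximate normal confidence interval over the sampled outcomes; this is one of several possible choices, not the only one:

```python
import statistics

def estimate_with_confidence(samples):
    """Mean of sampled outcomes plus an approximate 95% confidence
    half-width (1.96 standard errors). Assumes at least two samples and
    roughly normally distributed sample means."""
    mean = statistics.fmean(samples)
    half_width = 1.96 * statistics.stdev(samples) / len(samples) ** 0.5
    return mean, half_width

mean, hw = estimate_with_confidence([0.8, 1.1, 0.9, 1.2, 1.0])
print(f"estimated value: {mean:.2f} ± {hw:.2f}")
```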

  While it would be possible to sample the space of potential reconfigurations with respect to an uninformed heuristic (e. g., uniformly at random or by grid search), it is more effective to use information generated from sampling the model to drive further simulations. For instance, if a smart factory decision mechanism has found some generally promising direction of reconfiguration, it should distribute available computational resources accordingly. Consider that the system has found (by previous sampling) that moving machine X close to machine Y produces many high‐quality samples. Then it should use this configuration as a starting point for further reorganization refinement (e. g., positioning machine Z). However, the system should not stop investigating completely different, potentially even more promising configurations. In the literature, this dilemma is known as the exploration‐exploitation trade‐off (cf. [7]), and there exist numerous sample‐based approaches to tackle it, which can be readily used in modern systems such as a smart factory.
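
  One well-known representative of these approaches is the UCB1 bandit algorithm; the sketch below is a generic textbook-style implementation, with `pull` standing in for a hypothetical one-sample simulation of a reconfiguration direction:

```python
import math
import random

def ucb1(arms, pull, rounds=1000, c=math.sqrt(2)):
    """UCB1: a classic sample-based answer to the exploration-exploitation
    trade-off. `pull(arm)` simulates one candidate once and returns its
    reward."""
    counts = {a: 0 for a in arms}
    sums = {a: 0.0 for a in arms}
    for a in arms:                          # sample every candidate once
        sums[a] += pull(a)
        counts[a] += 1
    for t in range(len(arms), rounds):
        # Prefer arms with a high mean reward (exploitation), plus a
        # bonus that shrinks as an arm gets sampled (exploration).
        a = max(arms, key=lambda arm: sums[arm] / counts[arm]
                + c * math.sqrt(math.log(t) / counts[arm]))
        sums[a] += pull(a)
        counts[a] += 1
    return max(arms, key=lambda arm: sums[arm] / counts[arm])

# Toy usage: arm "XY" (machine X near Y) pays off more often than "XZ".
print(ucb1(["XY", "XZ"],
           lambda a: random.random() + (0.3 if a == "XY" else 0.0)))
```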

  Fig. 55.1 illustrates the idea schematically: The system incorporates two feedback loops. The first feedback loop, depicted on the left, describes the system’s interaction with the environment in an online manner, based on the current estimate of an optimal strategy. The second feedback loop, shown on the right, is of higher frequency than the first and captures the influence of previous sampling results on the sampling strategy.

  Fig. 55.1 Two feedback loops of a simulation‐based adaptive system
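
  The Python skeleton below mirrors this two-loop structure; the `ToyModel` interface, the strategy encoding and the update rule are all invented for illustration:

```python
import random

class ToyModel:
    """Stand-in for a learned predictive model (entirely hypothetical)."""
    def initial_strategy(self):
        return 0.5                            # strategy = probability of acting
    def sample(self, strategy):
        return random.random() < strategy     # one simulated outcome
    def update(self, strategy, outcome):
        # Inner-loop feedback: previous sampling results adjust the
        # sampling strategy.
        return 0.99 * strategy + 0.01 * (1.0 if outcome else 0.0)

def run_system(act, model, steps=100, samples_per_step=50):
    """Skeleton of the two feedback loops of Fig. 55.1: the outer loop acts
    online in the environment using the current strategy estimate; the
    inner, higher-frequency loop refines that estimate by sampling the
    model."""
    strategy = model.initial_strategy()
    for _ in range(steps):                    # outer loop: act online
        for _ in range(samples_per_step):     # inner loop: simulate
            strategy = model.update(strategy, model.sample(strategy))
        act(strategy)                         # execute the current best action

run_system(lambda s: None, ToyModel())
```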

 
