
Digital Marketplaces Unleashed


by Claudia Linnhoff-Popien


  The DSL@LMU provides an interface for its industry partners to the students of this elite program. For example, it facilitates joint projects between industry and students, such as Master's theses, seminars, and use cases, supervised by a scholar from the DSL@LMU.

  49.3 Sample Projects and Case Studies

  The DSL@LMU aims at transferring knowledge in the form of the latest scientific achievements to industry. This is typically implemented through joint research projects and/or case studies, for which the DSL@LMU serves as an organizational umbrella (it is not involved in the technical details as an entity, in order to prevent IP issues, which are negotiated with each project partner separately). In the following, we sketch three sample projects that have recently been conducted under this umbrella.

  49.3.1 Shared E‐Fleet – Efficiently Using Fleets of Electric Vehicles

  Mobility is the backbone of modern society. The economic and ecological challenges it bears are enormous. Ambitious climate goals compete with ever‐growing passenger and freight transport demands. Mastering these challenges without sacrificing driving convenience and efficiency is the ultimate goal of modern mobility.

  Recent studies offer hints on how to cope with these mobility requirements. The geo‐data enterprise NAVTEQ has shown that drivers using navigation devices increase fuel efficiency by 12%. The McKinsey Global Institute estimates around EUR 550 billion in annual consumer surplus from the use of personal location data, with the biggest share coming from fuel savings and efficiency gains due to navigation systems relying on telematics. The great hope is new technology and new data. The abundance of sensor data collected in traffic, summarized under the term (vehicle) telematics, is expected to optimize traffic, reduce congestion and improve logistic efficiency while at the same time increasing usability and convenience for drivers.

  The Database Systems Group at LMU Munich recognized this trend and its research opportunities early on. First publications in the field of traffic networks came as a natural extension of the field of spatial and temporal databases, the core topic for which the group has gained international renown. Shortly thereafter, members of the DSL@LMU helped to realize a large research project centered on electric mobility, which was kicked off in 2012. Funded by the German Federal Ministry for Economic Affairs and Energy (BMWi), the project united research partners with industry partners such as Siemens Corporate Technology, Carano Software Solutions and the car technology manufacturer Marquardt GmbH. The project, named Shared E‐Fleet, aimed at creating a comprehensive cloud‐based solution for the efficient use of fleets of electric vehicles (EVs).

  While electric mobility is not uncontroversial, its potential is certainly great. Electric infrastructure exists in all developed countries, electricity is relatively cheap and less prone to market fluctuations than oil, and its CO2 footprint is believed to be ecologically justifiable. In reality, however, electric mobility faces very basic limitations. The first limitation is the range, which varies between 100 and 150 kilometers. The second limitation is the rather long recharging time: while refueling a car takes minutes, a full recharge can take several hours. Both limitations become particularly evident when one vehicle is shared among multiple drivers.

  Over the course of the project, the Database Systems Group streamlined research and development and devised a spatial and temporal query system to counter these drawbacks. Based on real‐time telematics and historic sensor data, the query system supports drivers and fleet managers alike. The system, for instance, proactively informs the fleet manager about critical situations such as expected late returns or expected range limit violations. In addition, the system provides directions to the driver, for instance by computing the most efficient path to the nearest charging station or by visualizing the reachable destinations within the given range limit.
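  A reachable-destination query of this kind can be sketched as a budget-limited shortest-path search: a plain Dijkstra traversal that stops expanding once the energy budget is exhausted. The following Python snippet is a minimal illustration; the toy road network, edge costs and budget are invented for this example and are not taken from the project.

```python
import heapq

def reachable(graph, start, budget):
    """Return all nodes reachable from `start` within an energy `budget`.

    `graph` maps each node to a list of (neighbor, energy_cost) pairs.
    A plain Dijkstra search, cut off once a path exceeds the budget.
    """
    best = {start: 0.0}
    heap = [(0.0, start)]
    while heap:
        cost, node = heapq.heappop(heap)
        if cost > best.get(node, float("inf")):
            continue  # stale queue entry
        for neighbor, edge_cost in graph.get(node, []):
            new_cost = cost + edge_cost
            if new_cost <= budget and new_cost < best.get(neighbor, float("inf")):
                best[neighbor] = new_cost
                heapq.heappush(heap, (new_cost, neighbor))
    return best  # node -> minimal energy required to reach it

# Hypothetical road network: energy consumption in kWh per edge.
roads = {
    "depot": [("a", 3.0), ("b", 5.0)],
    "a": [("c", 4.0)],
    "b": [("c", 1.0)],
    "c": [("d", 10.0)],
}
print(reachable(roads, "depot", 9.0))
```

  Everything within the returned set could then be visualized for the driver as the reachable area under the current charge.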

  Combined with the hardware and software components provided by the other partners, the query system was brought into action in pilot projects. From June 2014 until September 2015, seven BMW i3s were made available to employees of technology parks in four different German cities. During the test phase, the initial concepts were refined in close cooperation with the partners. The advancements fueled research findings and vice versa, leading to several publications in the international research community. While the query system tackled modern mobility requirements from a practical angle in a real‐world project, the published research addressed them from a theoretical perspective.

  An example: Modern traffic networks are usually modeled as graphs, i. e., defined by sets of nodes and edges. In conventional graphs, the edges are assigned numerical weights, typically reflecting cost criteria like distance or travel time. In so‐called multicriteria networks, the edges reflect multiple, possibly dynamically changing cost criteria. While these networks allow for diverse queries and meaningful insights, query processing is usually significantly more complex. Novel means to keep query processing efficient were developed and published over the course of the project.
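  To make the added complexity concrete: with multiple cost criteria there is in general no single best path, only a set of mutually non-dominated (Pareto-optimal) cost vectors. A minimal sketch of the dominance check, using made-up candidate route costs:

```python
def dominates(a, b):
    """Cost vector a dominates b if it is no worse in every criterion
    and strictly better in at least one (smaller = better)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(costs):
    """Keep only the non-dominated multicriteria cost vectors."""
    return [c for c in costs if not any(dominates(o, c) for o in costs if o != c)]

# Hypothetical route costs: (distance_km, travel_time_min, energy_kwh)
routes = [(12, 20, 4.0), (10, 25, 3.5), (15, 18, 5.0), (14, 22, 4.5)]
print(pareto_front(routes))
```

  Because the result is a set of routes rather than one, query processing has to enumerate and prune many more candidates than a single-criterion shortest-path search, which is the efficiency problem the project's publications address.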

  Another example: Modern cars use ultrasonic sensors or cameras to detect parking spaces. By exploring this problem probabilistically, a novel model for the abstract problem of consumable and recurring resources in road networks was devised. Based on this model, a stochastic routing algorithm was developed which maximizes the probability of finding the desired resource while minimizing travel time. Besides the original application to parking spaces, the model may for instance be applied to vacant or occupied charging stations for EVs. Inspired by the practical use of sensors, a ubiquitous traffic problem was formalized with the help of probability theory and solved with algorithmic insight.
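  The core of such a stochastic routing approach can be illustrated with a toy calculation. Assuming, purely for illustration, that the candidate spots along a route are free independently of one another, the success probability of a route and a simple route choice look like this (numbers and the tie-breaking rule are invented, not the project's actual algorithm):

```python
def success_probability(free_probs):
    """Probability that at least one spot along the route is free,
    assuming independence between spots."""
    p_all_taken = 1.0
    for p in free_probs:
        p_all_taken *= (1.0 - p)
    return 1.0 - p_all_taken

def best_route(routes):
    """Among candidate routes, maximize the success probability and
    break ties by minimal travel time.
    Each route: (travel_time, [p_spot_free, ...])."""
    return max(routes, key=lambda r: (success_probability(r[1]), -r[0]))

routes = [
    (8,  [0.2, 0.3]),         # short route, risky spots
    (12, [0.5, 0.4, 0.3]),    # longer route, safer spots
]
print(best_route(routes))
```

  The actual model in the publications handles the harder case where resources are consumable (a spot taken by another driver stays taken) and reappear over time, which this independence assumption ignores.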

  It is this fruitful combination of theory and practice which helps to understand and solve some of the problems of modern mobility. Where research papers lay the groundwork, demonstrations extend the theory and real‐world projects test its reliability under authentic conditions. Thus, as demonstrated by the Shared E‐Fleet project, cooperation between industry and research yields valuable results for both sides.

  49.3.2 VISEYE – A New Approach for Mobile Payment

  In a collaboration with a small or medium‐sized enterprise (SME) from the mobile payment sector, a new system for image mapping and classification on mobile devices was explored at the DSL@LMU. The system was designed as an alternative to existing technology such as QR codes, and the main challenge was that it needed to be robust against fraud.

  As a result, the system implemented a sequence of seven different algorithmic steps (from image processing through feature extraction to classification) in a distributed client/server setting. In this architecture, it turned out that several parameters are crucial for the efficiency of the process, and thus decisive for high consumer acceptance. Depending on the quality of the network and the status of the battery, it is generally favorable to perform as many steps as possible on the mobile device. To formalize the allocation of the steps of the overall process to the different parties (client: mobile device versus central server), a cost model was developed that decides this allocation of jobs ad hoc, given several parameters such as those mentioned above (battery status, network quality) as well as further potentially influential parameters such as location and previously learned user preferences.
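  As an illustration of what such a cost model might look like, the sketch below assigns each pipeline step to the client or the server by comparing a local energy cost against a remote transfer cost. The step names, weights and cost formulas are invented for illustration; the actual model in the project additionally incorporated parameters such as location and learned user preferences.

```python
def allocate_steps(steps, battery_level, bandwidth_mbps):
    """Assign each processing step to 'client' or 'server'.

    Hypothetical cost model: running a step locally is penalized when
    the battery is low, offloading is penalized when the network is slow.
    """
    plan = {}
    for name, cpu_cost, payload_mb in steps:
        local_cost = cpu_cost / max(battery_level, 0.05)      # scarce battery => local work expensive
        remote_cost = payload_mb / max(bandwidth_mbps, 0.1)   # slow network => offloading expensive
        plan[name] = "client" if local_cost <= remote_cost else "server"
    return plan

# (step name, relative CPU cost, data to transfer if offloaded, in MB)
pipeline = [("preprocess", 1.0, 8.0), ("features", 2.0, 0.5), ("classify", 5.0, 0.1)]
print(allocate_steps(pipeline, battery_level=0.8, bandwidth_mbps=2.0))
```

  Note how the decision flips per step: the image-heavy first step stays on the device because its payload would be expensive to transmit, while the later, compute-heavy steps with small intermediate results are offloaded.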

  This project was also a prototype for a successful collaboration between industry and academia that leverages the transformation of cutting‐edge research into an innovative product. The commercial use case developed by the SME led to novel fundamental research questions, e. g. the development of the cost model and the integration of geo‐coordinates or user preferences, which had to be solved in theory first. The implementation of these theoretical results in turn tested their accuracy and reliability in real‐world scenarios.

  49.3.3 Process Mining – Leveraging Automation and Logistics

  Mid‐ and large‐scale businesses have many opportunities to collect large amounts of data during daily operations. While logging business actions is rather easy and straightforward, the information these logs contain is only revealed by analyzing them appropriately. Only transforming business data from simple log files into meaningful models creates valuable assets.

  The emerging field of process mining addresses this problem by combining the research areas of data mining and business process modelling. The main focus is to develop tools that answer the questions arising for today's managers and decision makers: What happened? Why? What can be expected in the future? And what is the best decision to make?

  To answer these questions, the raw business logs, which offer timestamped event data, are analyzed to infer structurally fitting workflow graphs that model the real world.
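  A first building block of many process-discovery techniques is the directly-follows relation extracted from such timestamped logs: counting how often one activity is immediately followed by another within the same case. A minimal sketch in Python, with a made-up event log:

```python
from collections import Counter

def directly_follows(log):
    """Count how often activity a is directly followed by activity b.

    `log` maps a case id to its time-ordered list of activities; the
    resulting counts form the raw material for discovering a workflow graph.
    """
    edges = Counter()
    for trace in log.values():
        for a, b in zip(trace, trace[1:]):
            edges[(a, b)] += 1
    return edges

# Hypothetical log of three passenger trips.
log = {
    "trip-1": ["enter", "wait", "exit"],
    "trip-2": ["enter", "exit"],
    "trip-3": ["enter", "wait", "wait", "exit"],
}
print(directly_follows(log))
```

  The weighted edges can then be rendered as a graph whose structure approximates the underlying process.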

  To illustrate this, consider a public transport service scenario (see Fig. 49.2). The sensors to log most of this data are already present nowadays, as nearly everybody carries a smartphone and uses the mobile applications of their local transportation service provider. The collected events contain waiting times, vehicle entries and exits. A simple analysis can certainly identify choke points or idle spots for some routes. But process mining offers even better and more intelligent methods to analyze connections between parts of the processes. These dependencies are common, but not always obvious for humans to identify in complex networks.

  Fig. 49.2 Two buses operate in the same area. The blue bus has a high probability of stopping at the traffic‐light‐controlled junction. That causes the red bus to overtake it most of the time. The red bus arrives first at the common bus stop. It will pass a second crowded stop before driving to the final bus stop. The blue bus, being the late vehicle, will be almost empty on this route. Analyzing the log files will reveal why many people gave negative feedback regarding overcrowded buses in this area

  A bus which has a long stop time at a traffic light might cause an even longer customer waiting time at the next bus stop. The people there will react, perhaps by choosing another bus line. That line will then be overcrowded, causing delays on other lines, and so on. Most of these effects will weaken as they propagate through the network, but in certain cases a small influence in one part can cause a jam in a distant part of the network. Process mining assists in detecting those unobvious dependencies and in finding solutions to prevent such issues.

  Thus, the analysis of historical data is used to reveal dependencies in graphs such as workflow nets, which give insight into the causal connections between different actions in the process. Using this knowledge can augment decision making in future process instances. In our particular example, bus frequencies can be reduced or increased, or lines can be rerouted to circumvent difficult spots.

  At the DSL@LMU, we pay particular attention to changing temporal dependencies in processes. Drifts in service times or increasing production cycle times are often an important indicator that the intervention of a supervisor is necessary. Many tools start working slower until they break completely; in hazardous environments, the critical parts should be replaced beforehand to protect people. To achieve this goal, we analyze process events, extract temporal information and analyze shifts to identify and categorize the type of change. Our next step involves the analysis of dependencies between delays in complex business workflow nets. For example: can a problem in action A cause a delay in action B, although both actions seem to be rather independent? If A is only a minor action in the whole process, but B is rather significant, the benefit of this knowledge would be tremendous. A small delay of an overlooked secondary resource might cause production delays, and these total delays can lead to contractual penalties everybody wants to avoid. By identifying the source of the problem, we can fix it at a point that allows simple intervention at low cost.
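  A very simple form of such drift detection compares a sliding window of recent service times against a baseline window from the start of the observation. The sketch below is deliberately naive (fixed window size, fixed threshold factor, invented numbers) and only illustrates the idea of flagging a shift in cycle times:

```python
from statistics import mean

def detect_drift(durations, window=5, threshold=1.3):
    """Flag the first index where the mean of a recent window of service
    times exceeds the baseline mean by the given factor."""
    if len(durations) < 2 * window:
        return None  # not enough data for a baseline and a recent window
    baseline = mean(durations[:window])
    for i in range(window, len(durations) - window + 1):
        recent = mean(durations[i:i + window])
        if recent > threshold * baseline:
            return i
    return None

# Service times of a machining step; the cycle time creeps upward.
times = [10, 11, 10, 10, 11, 10, 12, 14, 15, 16, 17]
print(detect_drift(times))
```

  Real drift detection would additionally distinguish sudden, gradual and recurring changes, which is exactly the categorization of change types mentioned above.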

  In addition to these temporal problems, we are highly interested in event streams, as process logs have been growing larger and larger recently. Process analysis methods have to be efficient in time and memory in order to deal with frequently arriving events on the one hand and to return results within short reaction times on the other.

  49.4 Conclusions

  The newly established Data Science Lab at the Ludwig‐Maximilians‐Universität Munich (DSL@LMU) is an interface between academia and industry, open for innovative corporate partners. It offers a platform for various joint activities between academia and industry, providing access to cutting‐edge know‐how in analyzing data, strong visibility among academia and industry, as well as a link to the highly talented Data Science students from LMU, including those from the new elite Master program “Master of Data Science” at the LMU Munich.

  Contact: http://dsl.ifi.lmu.de/data-science-lab

  © Springer-Verlag GmbH Germany 2018

  Claudia Linnhoff-Popien, Ralf Schneider and Michael Zaddach (eds.): Digital Marketplaces Unleashed. https://doi.org/10.1007/978-3-662-49275-8_50

  50. Diagnosis as a Service

  Franz Wotawa1 , Bernhard Peischl1 and Roxane Koitz1

  (1)Technische Universität Graz, Graz, Austria

  Franz Wotawa (Corresponding author)

  Email: wotawa@ist.tugraz.at

  Bernhard Peischl

  Email: bpeischl@ist.tugraz.at

  Roxane Koitz

  Email: rkoitz@ist.tugraz.at

  50.1 Introduction

  One aim of Industry 4.0 is to further automate production as a whole by enhancing data exchange and flexibility in manufacturing, in order to reduce costs and to allow a stronger customization of products according to users’ needs. To increase the flexibility of production for mass customization, there is a strong need for optimization, configuration and diagnosis of all systems involved in manufacturing (see [1]).

  A production facility has to react to different, strongly customized orders coming in in real time. In case faults occur during manufacturing, the underlying cause has to be identified and the whole production process has to be reconfigured. In addition, production should be optimized during operation to allow fast delivery of products using fewer resources.

  In this chapter we focus on diagnosis for Industry 4.0. In particular, we discuss the underlying requirements and afterwards present a methodology that serves this purpose. Diagnosis itself is the activity or task of identifying the underlying reason for a certain observed deviation from the expected behavior of a system. Such a deviation is usually referred to as a symptom, and diagnosis aims at localizing the underlying root cause, which is later used for applying treatments in order to bring the system back into its desired ordinary state. For example, in medicine humans can be seen as systems. If a patient comes to a medical doctor with certain symptoms such as fever and headache, the doctor forms hypotheses about the underlying disease. By obtaining more information about the health state of the patient, these hypotheses are further reduced, and in an ideal world a single root cause remains, which serves as the basis for further medical treatment. The situation is the same for technical systems, where diagnosis is one of the tasks of system maintenance.

  Is diagnosis in the context of Industry 4.0 different from ordinary system diagnosis? The answer is yes. Due to the required flexibility of manufacturing, leading to systems that adapt themselves according to certain demands, the structure of the system changes over time. For example, in one production step a certain work piece passing a drilling machine might be fed directly to a paint shop, whereas another work piece might first undergo further mechanical treatment. Hence, when the quality criteria of a work piece are not met, we have to know the responsible structure of the system and the behavior of the involved machine tools. Therefore, there is the requirement of having a diagnosis method that can adapt itself to structural changes without human intervention.

  In addition to this general requirement, there is a need to make such a general diagnosis method easily accessible, ideally without requiring a deep understanding of the underlying foundations. One idea in this direction is the concept of providing software as a service to the general public [2]. The Software as a Service (SaaS) concept was intended to provide a software’s functionality to customers without requiring them to think about deployment and maintenance. In SaaS, the ownership of the software remains with the developer and only its use is granted to the customer. In recent years, SaaS has moved in the direction of Everything as a Service (XaaS) [3], extending this idea to any potential service that can be provided using hardware and software.

  SaaS and XaaS have gained more and more importance. Besides classical and popular cloud services providing computing power or storage capacity, there is a growing interest in services for editing documents, among others. Google Docs is one example of this development, and there are many more. To bring diagnosis into practice and to increase the accessibility of diagnosis functionality, diagnosis services would have to be provided over the Internet.

  Unfortunately, this is not easy, because diagnosis requires knowledge about the system, for example its structure and behavior, or other types of knowledge facilitating diagnosis. This is similar to a text editor, which requires documents comprising the textual information. For diagnosis, we need documents describing the diagnosis knowledge in a machine‐readable form.
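  To give an impression of what such machine-readable diagnosis knowledge enables, the sketch below shows the core idea of consistency-based diagnosis on a hypothetical toy model of two inverters in series: find the minimal sets of components which, if assumed abnormal, make the system model consistent with the observation. This is only an illustration of the general principle, not the methodology presented later in the chapter.

```python
from itertools import chain, combinations

def diagnoses(components, observation_holds):
    """Return the minimal sets of components that, when assumed abnormal,
    make the system model consistent with the observations.

    `observation_holds(abnormal)` is a system-specific predicate telling
    whether the observations are explainable when exactly the components
    in `abnormal` behave arbitrarily.
    """
    found = []
    subsets = chain.from_iterable(
        combinations(components, k) for k in range(len(components) + 1))
    for candidate in subsets:
        cand = set(candidate)
        if any(d <= cand for d in found):
            continue  # not minimal: a smaller set already explains the symptoms
        if observation_holds(cand):
            found.append(cand)
    return found

def two_inverter_check(abnormal):
    """Toy model: input 1 feeds inv1, then inv2; observed output is 0,
    although two healthy inverters would restore the input value 1."""
    inp = 1
    mid_options = {0, 1} if "inv1" in abnormal else {1 - inp}
    outputs = set()
    for mid in mid_options:
        outputs |= {0, 1} if "inv2" in abnormal else {1 - mid}
    return 0 in outputs  # can the model produce the observed output?

print(diagnoses(["inv1", "inv2"], two_inverter_check))
```

  Here either single inverter being faulty explains the wrong output, so both singleton sets are returned as minimal diagnoses. The key point for a diagnosis service is that only the model (`two_inverter_check`) is system-specific; the reasoning itself is generic.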

 
