Good Practices in Data Management

Best Data Management Practices from the CDQ Good Practice Award

Process Automation CDL+ // "Cinderella"

  • Jens-Peter Henriksen, Jens Greiner (Bayer AG)
  • 2020

A novel approach to automated decision making in Vendor Master Data Management

Bayer's good practice presents a forward-looking approach to automating master data workflows and managing the trade-offs between data quality, risk and manual effort in vendor master data management.

Bayer's CDL+ framework builds on a semantic knowledge graph and more than 1,500 Data Quality Rules defined in the Data Sharing Community, and combines them with Bayer-specific rules. These executable business rules, together with validation against external data, allow instant, risk-based approval of master data requests instead of 24-hour service levels.
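Conceptually, such executable rules can be pictured as a small rule engine: each rule checks one aspect of the requested vendor record, failed rules add to a risk score, and only low-risk requests are approved instantly while the rest are routed to a data steward. The following Python sketch is purely illustrative; the rule names, risk weights and threshold are assumptions, not Bayer's actual CDL+ rules.

    import re
    from dataclasses import dataclass
    from typing import Callable

    @dataclass
    class Rule:
        name: str
        check: Callable[[dict], bool]   # True = rule passed
        risk_weight: int                # risk added when the rule fails

    # Hypothetical rules; real frameworks combine hundreds of shared and company-specific rules
    RULES = [
        Rule("vat_id_format", lambda v: bool(re.fullmatch(r"[A-Z]{2}[A-Z0-9]{2,12}", v.get("vat_id", ""))), 3),
        Rule("bank_account_present", lambda v: bool(v.get("iban")), 2),
        Rule("country_code_valid", lambda v: len(v.get("country", "")) == 2, 1),
    ]

    def assess(request: dict, auto_approve_threshold: int = 2) -> str:
        # Accumulate risk from failed rules; approve instantly only when risk stays low
        risk = sum(r.risk_weight for r in RULES if not r.check(request))
        return "auto-approved" if risk < auto_approve_threshold else "manual review"

    print(assess({"vat_id": "DE811240952", "iban": "DE02 1203 0000 ...", "country": "DE"}))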

The initial pilot scope achieved a considerable automation rate as well as several secondary benefits in the areas of data quality and documentation of data-related knowledge. The utilization of the framework translates into a monetary business case. Moreover, it provides a future blueprint for other data objects (…) and presents an approach for system-enabled Data Governance covering many aspects of the CDQ Data Excellence Model.

Linking Cause and Effect in Data Quality with Machine Learning

  • Alan Hendrickx, James Whale (Deutsche Bank AG)
  • 2020

Self-Learning DQ Measurement Tool

In 2017, Deutsche Bank's Chief Data Office (CDO) built and implemented a Data Quality Management process called DQ Direct. This is now an enterprise-wide application that captures data quality issues and manages them through to remediation, providing insightful analytics that help the bank's divisions improve their data. It has won a number of international data awards and was a finalist in the 2017 CDQ Good Practice Awards. DQ Direct was initially used to capture data quality issues identified during the control process, usually at the end of the production cycle.

The next challenge was how to identify DQ issues as early as possible in the process and avoid the cost and latency from retrospective identification. In response to this challenge, the Chief Data Office has developed 'Auto-DQ'.

Auto-DQ uses machine learning to predict relevant DQ rules by looking at patterns in the data over multiple time periods. Auto-DQ has allowed Deutsche Bank to connect Data Quality Rules to data quality issues, which means the bank can evidence that a data remediation has been effective through improved rule scores. This provides complete end-to-end connectivity between an objective business rule and a business process consequence.
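As an illustration of the general idea (not Deutsche Bank's Auto-DQ implementation), one can profile a data feed over past reporting periods and flag a new period whose profile deviates from the learned pattern, thereby suggesting where a data quality rule is likely relevant. The metrics, model and figures below are assumptions for the sketch.

    import numpy as np
    from sklearn.ensemble import IsolationForest

    # One row per past reporting period: [null_ratio, duplicate_ratio, distinct_values]
    history = np.array([
        [0.01, 0.00, 950],
        [0.02, 0.01, 960],
        [0.01, 0.00, 955],
        [0.02, 0.00, 962],
    ])
    latest = np.array([[0.18, 0.02, 610]])   # profile of the current period

    # Learn the "normal" pattern and test whether the latest period deviates from it
    model = IsolationForest(contamination=0.1, random_state=0).fit(history)
    if model.predict(latest)[0] == -1:
        print("Pattern shift detected: propose completeness/uniqueness rules for this feed")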

Data excellence in the product innovation process

  • Dr. Jens-Uwe Wüstling, Torge Orthmann (Beiersdorf AG)
  • 2020

Data excellence in the product innovation process

Beiersdorf, a globally leading provider of innovative, high-quality skin care products, presents its first Master Data app and the first self-developed Fiori App together with its IT service provider Beiersdorf Shared Services (BSS). The product innovation process is one of the most important success factors for Beiersdorf. The Fiori app helps Supply Chain Project Leaders coordinate all functions in the execution phase of this process quickly and efficiently across sourcing, production and logistics. It provides a high level of transparency throughout the entire execution phase and significantly improves the user experience.

Supply Chain Project Leaders can easily track the status of the respective Master Data in their projects in real time. For the development of the app, the team used Kanban and Design Thinking as agile project management methods, supported by Build.Me as a screen-design technique, to ensure a very high level of user-friendliness. The app is part of the Master Data Management program at Beiersdorf, a company-wide initiative to bring more simplicity to Beiersdorf's system setup and to establish a uniform and integrative Master Data system for all processes. The clear goal is a high level of user acceptance and thus a significantly better user experience, making life easier for all Master Data users and consumers.

Data Management in all Data Areas

  • Markus Rahm, Achim Gooren, Sabrina Karwatzki (Schaeffler Technologies AG & Co. KG)
  • 2019

Professional Data Management in 47 Data Domains

The global supplier to the automotive and industrial sectors is helping shape the rapid developments taking place worldwide as part of Mobility for Tomorrow, including innovative and sustainable solutions for E-mobility and Industry 4.0. Professional data management is a key success factor under these increasingly complex conditions, as data is strongly linked to business value.

Schaeffler Corporate Data Management, the CDQ Good Practice Award winner of 2016, implemented the project “Data Domain Management in all Data Areas” to support the digital transformation process. The goal was to guarantee high standards of data quality as a foundation for future business opportunities, advanced analytics and efficient processes, and to meet information security needs through group-wide data responsibility. The scope of the project was huge, covering all types of data from all business functions and divisions.

The corporate data management experts defined and analyzed all data areas. Responsible data domain managers were nominated for each of the 47 data domains with the help of top management; afterwards, they received training and consulting. As a result, Schaeffler observed several business improvements. For example, the data quality in the plants’ supply chain increased significantly.

The Data Domain Management project also contributed to a cultural change. Today, data management at Schaeffler is considered a discipline that is executed in all business functions and divisions, not just by a master data expert team in the IT department. Awareness, transparency and knowledge regarding data were raised substantially.

Industrialized machine learning in high-volume production

  • Robert Prager, Marco Keith, Dr. Tobias Windisch, Andreas Huditz (Robert Bosch GmbH)
  • 2019

Using AI to Become a Digital Factory

"The Robert Bosch plant in Blaichach/Immenstadt produces brake control systems, fuel injection components and sensors for autonomous driving. In 2017, 6.5 million have been produced in the Blaichach/Immenstadt plant. The plant has a clearly defined strategy of becoming a «digital factory». As Bosch already has a complete digital version of the end-to-end process of material and data of all products, all IT systems have interfaces to the systems of suppliers and customers. Now, Bosch aims at making better use of knowledge in the areas of plant engineering, IT and plant operation, targeting a significant annual productivity increase with the help of AI.

The operational excellence team at the Blaichach/Immenstadt plant started with the implementation of a first AI use case for functionality checks of the ABS/ESP parts at the final assembly. In the past, each part had to be checked up to four times until the result was determined. Now, the decision is made by a neural network and processed directly on the machine, which eliminated repeated checks at the test bench and resulted in thousands of euros in savings.
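As a hedged sketch of the underlying technique (Bosch's actual model, features and tooling are not described here), a small neural network trained on historical test measurements and their final pass/fail outcomes can take over the decision for new parts directly on the machine:

    import numpy as np
    from sklearn.neural_network import MLPClassifier

    # Toy data: each row is a vector of measurements from the functionality check
    # (e.g. pressure, current, timing); labels are the final outcome after repeated checks.
    X = np.array([[1.02, 0.48, 3.1], [0.98, 0.51, 3.0], [1.45, 0.80, 4.2], [1.50, 0.77, 4.5]])
    y = np.array([1, 1, 0, 0])          # 1 = part OK, 0 = not OK

    clf = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0).fit(X, y)

    # Decide directly on the machine instead of repeating the check at the test bench
    new_part = [[1.00, 0.50, 3.05]]
    print("part OK" if clf.predict(new_part)[0] == 1 else "route to test bench")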

The Bosch team works according to the CRISP-DM standard; for them, collaboration and cross-functional teams have been a key success factor. The data scientists analyze the data in very close cooperation with process specialists. Even during the modeling phase, there is an extensive exchange with automation technicians, as they will intervene in the course of production based on the decisions of the AI. Training a single AI model is easy, but maintaining and deploying thousands of models is not. The Bosch team has laid the foundation for thousands of other use cases they want to implement and is currently working on the next ones with the clear goal of industrializing AI.

Driving Efficiency in Data Management

  • Thomas Ruhl, Christopher Reimann, Julian Blasch, Bastian Finkel, Vika Venugopal, Martin Stocker (SAP SE)
  • 2019

Driving Efficiency in Data Management with Robotic Process Automation

The leading provider of enterprise software is applying robotic process automation (RPA) to drive operational efficiency in its shared services centers for data management. Furthermore, RPA helps improve the employee experience and ensures trusted data for SAP as a company.

The data strategy & operations team of SAP followed a systematic approach of running a pilot to drive adoption of this new practice within their organization. This resulted in identifying, assessing and prioritizing more than 20 RPA use cases in data management. They have already implemented 5 use cases with tangible business results. Some of these cases achieved up to 800% process efficiency gains, data quality improvements and significantly better user experience. The team went one step beyond the initial goal by showcasing how to orchestrate RPA and human intelligence into an intelligent workplace of the future for their data agents. By the end of the project, they have turned it into a program aimed at establishing a center of expertise surrounding RPA for data management.

Data Quality Assessment Method (DQAM)

  • Signe Horn Thomsen, Pepe Wadsholt Herrera Julio (Grundfos)
  • 2019

Considering Data Requirements as Product Requirements

The world's largest pump manufacturer considers data assets as key success factors for their ongoing digital transformation. The data quality of their pumps’ IoT/streaming data is essential for good customer experience, to accelerate digital offers and as a foundation for analytics. At Grundfos, data specialists now convert specific data demands into actual product requirements early in the development process to proactively improve data quality and, thereby, the product.

During product development, they focus on what data will be needed from a data science perspective. The data quality assessment method scores the data sets of a new product along four dimensions. The first dimension, data definition, evaluates the metadata. The second, data quality, assesses the quality of the data sets against the previously defined requirements, e.g., completeness or consistency. The third, data availability, scores how FAIR (findable, accessible, interoperable, reusable) the data set is. The last, data documentation, evaluates how raw and processed data are documented.
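A minimal sketch of such a scorecard might aggregate per-dimension scores into one assessment; the 0-100 scale and the equal weights below are assumptions for illustration, not Grundfos' actual scoring scheme.

    def dqam_score(scores: dict, weights: dict = None) -> float:
        # Aggregate per-dimension scores (0-100) into one weighted assessment
        dimensions = ["data definition", "data quality", "data availability", "data documentation"]
        weights = weights or {d: 1 / len(dimensions) for d in dimensions}
        return sum(scores[d] * weights[d] for d in dimensions)

    report = {
        "data definition": 80,       # how well the metadata is defined
        "data quality": 65,          # completeness, consistency vs. requirements
        "data availability": 70,     # how FAIR the data set is
        "data documentation": 50,    # raw and processed data documented
    }
    print(f"DQAM score: {dqam_score(report):.1f}/100")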

The resulting report gives clear recommendations on how to improve data quality before the product enters the «prepare production» phase. Besides better IoT/streaming data, this initiative also helped raise awareness of data quality within the company and establish a common language between business experts and data scientists.

The Deutsche Telekom Chief Data Office Start-Up Program

  • Susan Wegner, Zuzana Krifka Dobes, Roland Schwaiger, Dr. Christian Weiss (Deutsche Telekom AG)
  • 2018

The leading integrated telecommunications company set up a “Chief Data Office” (CDO). The purpose of this new business unit is to open up data silos and to foster analytical skills within the company. A key priority is to identify and implement use cases that generate business value from vast sources of data.

One major challenge is the mindset shift regarding how data is treated within the company. To address these issues, the CDO organised its activities into four key areas: (1) Implementation of big data use cases and development of guidelines to standardise these use cases in order to streamline them for faster re-use. (2) Development of the architecture and the "T-Data model" for promoting a shared understanding of the key data entities inside the firm. (3) Creation of a CDO portal to allow the community to exchange and collaborate. (4) Development of a data governance blueprint focusing on transparency, quality and privacy.

The chief data office has been successful in creating building blocks toward its overall data vision. It established a common vocabulary and mindset on use cases, data analytics and data governance topics, increased the visibility of available data assets and made it easier to transfer use case assets as well as to exploit them for new use cases.

Combining Defensive & Offensive Data Strategies to Transform a Traditional FMCG Company into a Data Driven Enterprise

  • Eric Daly, Alice Vaskova, Amelie de Lamaze (Philip Morris International)
  • 2018

The leading tobacco company set up an Enterprise Analytics and Data (EAD) business unit in 2017 to support the fundamental transformation of its business model and product portfolio towards creating a "smoke-free future". Philip Morris International sees data as a key engine in the acceleration of this business transformation. The novelty of the EAD programme is in the convergence of data governance and data science, which enables the company to overcome the opposition traditionally created between defensive and offensive data strategies.

In the last eighteen months, the team identified more than 40 data owners and 200+ business/data experts. In parallel, they implemented an enterprise-wide data governance repository platform to capture and manage metadata and to facilitate interactions between data owners. They registered over 2,300 business terms, metrics and KPIs, over 700 data entities, over 5,200 data attributes and over 9,500 reference data values, and mapped more than 27 conceptual and 54 logical data models. Additionally, a data lake called "PMI Data Ocean" has been set up as the enterprise-wide platform for data analytics and data science, delivering a cross-functional data repository. The team took special care to embed “data privacy by design” processes into their operational deliveries as well as into analytical use cases and the design of digital solutions. Philip Morris International and the Competence Center Corporate Data Quality have published a case study about this innovative approach.

Predicting the tariff code of a material master using artificial intelligence

  • Klaus Pfreundner, Valk Flegl, Georg Hinselmann (Robert Bosch GmbH)
  • 2018

The leading global supplier of technology and services started to use supervised machine learning algorithms to predict product tariff numbers with high accuracy. This solution is a good example of how artificial intelligence can greatly improve the quality of services while simultaneously reducing costs, since the company can rely on high data quality.

Manually assigning tariff numbers to a product is a time-consuming procedure, yet for foreign trade every company must classify its products as a prerequisite for export/import processes. As a result of this innovative solution, the quality, speed and accuracy of the shared classification service have improved significantly. The enabler is the combination of high-quality material master data and a standardized global tariff code classification process at Bosch.
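The general technique can be sketched as supervised text classification over material master descriptions; the features, model and example codes below are illustrative assumptions, not Bosch's actual implementation.

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    # Toy training data: material master short texts and (illustrative) tariff codes
    descriptions = [
        "hydraulic pump housing aluminium",
        "abs sensor cable assembly",
        "pressure relief valve steel",
        "wheel speed sensor connector",
    ]
    tariff_codes = ["8413.91", "8708.99", "8481.20", "8708.99"]

    # Word n-grams turn free-text descriptions into features for the classifier
    model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression(max_iter=1000))
    model.fit(descriptions, tariff_codes)
    print(model.predict(["steel pressure valve"])[0])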


Good Practice Award

The CDQ Good Practice Award acknowledges outstanding, innovative projects in the field of data management. The practices submitted are evaluated by an international jury of data management experts and by the member organizations of the CC CDQ.