Prof. Dr. rer. nat. habil. Gunter Saake

Research Group Leader


Fakultät für Informatik (FIN)
Institut für Technische und Betriebliche Informationssysteme (ITI)
Gebäude 29, Universitätsplatz 2, 39106 Magdeburg, G29-110
Office hours: by appointment via e-mail: anja.buch@ovgu.de

Gunter Saake was born in 1960 in Göttingen, Germany. He received the diploma and the Ph.D. degree in computer science from the Technical University of Braunschweig, Germany, in 1985 and 1988, respectively. From 1988 to 1989 he was a visiting scientist at the IBM Heidelberg Scientific Center, where he joined the Advanced Information Management project and worked on language features and algorithms for sorting and duplicate elimination in nested relational database structures. In January 1993 he received the Habilitation degree (venia legendi) in computer science from the Technical University of Braunschweig. Since May 1994, Gunter Saake has been full professor for Databases and Information Systems at the Otto von Guericke University Magdeburg. From April 1996 to March 1998 he was dean of the Faculty of Computer Science at the Otto von Guericke University Magdeburg, and he was elected dean again in 2012.

Gunter Saake participated in a national project on object bases for experts and in the European BRA working groups IS-CORE, ModelAge, ASPIRE and FIREworks. His research interests include the conceptual design of database applications, query languages for complex database structures, and languages, semantics and methodology for object-oriented system specification and application development in distributed and heterogeneous environments. He is a member of the ACM, the IEEE Computer Society, the GI, and the organization committee of the GI AK 'Foundations of Information Systems'. Besides being author and co-author of numerous scientific publications, he is the author of the book "Object-oriented Modelling of Information Systems" and co-author of four lecture books on database concepts, a book on Java and databases, a lecture book on object databases, and a book on building efficient applications with Oracle8 (all in German).

Current projects

A Common Storage Engine for Modern Memory and Storage Hierarchies
Duration: 01.10.2022 to 30.04.2026

Scientific research is increasingly driven by data-intensive problems. As the complexity of the studied problems rises, so does the need for high data throughput and capacity. The globally produced data volume doubles approximately every two years, leading to an exponential data deluge. This deluge directly challenges database management systems and file systems, which provide the foundation for efficient data analysis and management. These systems use different memory and storage devices, traditionally divided into primary, secondary and tertiary memory. With the introduction of the disruptive technology of non-volatile RAM (NVRAM), however, these classes have started to merge into one another, leading to heterogeneous storage architectures in which each storage device has very different characteristics (e.g., persistence, storage capacity, latency). Hence, a major challenge is how to exploit the specific characteristics of these memory devices.
To this end, SMASH will investigate the benefits of a common storage engine that manages a heterogeneous storage landscape, including traditional storage devices and non-volatile memory technologies. The core for this storage engine will be B-epsilon-trees, as they can be used to efficiently exploit these different devices. Furthermore, data placement and migration strategies will be investigated to minimize the overhead caused by transferring data between different devices. Eliminating the need for volatile caches will allow data consistency guarantees to be improved. From the application side, the storage engine will offer key-value and object interfaces that can be used for a wide range of use cases, such as high-performance computing (HPC) and database management systems. Moreover, due to the widening gap between the performance of computing and storage devices as well as their stagnating access performance, data reduction techniques are in high demand to reduce the bandwidth requirements when storing and retrieving data. We will, therefore, conduct research regarding data transformations in general and the possibilities of external and accelerated transformations. As part of SMASH, we will provide a prototypical standalone software library to be used by third-party projects. Common HPC workflows will be supported through an integration of SMASH into the existing JULEA storage framework, while database systems can use the interface of SMASH directly whenever data is stored or accessed.
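The write-optimization idea behind B-epsilon-trees can be sketched in a few lines. The following toy class (names and structure are our own invention, not the SMASH interface) batches updates in a fast in-memory level and flushes them to a slower level in bulk, which is how such trees amortize writes across a storage hierarchy:

```python
class BufferedStore:
    """A two-level key-value store: updates collect in an in-memory buffer
    (cheap, as on NVRAM) and are flushed in batches to a sorted 'leaf' level
    (expensive, as on disk). Batching amortizes the cost of slow-device writes."""

    def __init__(self, buffer_capacity=4):
        self.buffer = {}                 # pending upsert messages
        self.leaves = {}                 # the "on-disk" level
        self.buffer_capacity = buffer_capacity
        self.flushes = 0                 # count batch writes to the slow level

    def put(self, key, value):
        self.buffer[key] = value
        if len(self.buffer) >= self.buffer_capacity:
            self._flush()

    def get(self, key):
        # A lookup must consult the buffer first: it holds the newest messages.
        if key in self.buffer:
            return self.buffer[key]
        return self.leaves.get(key)

    def _flush(self):
        self.leaves.update(self.buffer)  # one batched write instead of many
        self.buffer.clear()
        self.flushes += 1

store = BufferedStore(buffer_capacity=4)
for i in range(10):
    store.put(f"k{i}", i)
print(store.get("k3"), store.flushes)    # prints: 3 2
```

In an actual B-epsilon-tree every inner node carries such a buffer, so messages trickle down the tree in batches and each level of the storage hierarchy mostly sees large sequential writes.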

View project in the research portal

Learning Adaptivity in Heterogeneous Relational Database Systems (LARDS)
Duration: 01.04.2022 to 01.04.2026

With the ever-increasing heterogeneity of hardware, the database community is tasked with adapting to the new reality of diverse systems with a rich set of different architectures, capabilities and properties.
The traditional workflow of hand-tuning implementations to the underlying hardware for peak performance is commonly considered untenable for an ever-growing variety of hardware with different performance characteristics. Systems like Micro-Adaptivity in Vectorwise or HAWK have been studied as solutions, but their adoption remains limited.
This project aims to explore solutions for a fully adaptive query execution engine and techniques that allow for simple adoption. To achieve this goal, we plan to tackle four problems.
First, we investigate how to build micro-optimizations into a hardware-oblivious query pipeline in an efficient and simple-to-maintain way, while still offering a large optimization space. Second, we investigate how to select the best optimizations automatically and in an on-the-fly adapting way, depending on the query and hardware properties.
Third, we investigate the integration of the previous research results into a traditional query execution pipeline and query plan generation.
In the last phase of the project, we will explore techniques that can be used to augment the demonstrator with OLTP capabilities and introduce micro-optimizations into transaction processing.
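The adaptive-selection idea can be illustrated with a minimal explore-then-exploit loop (the names and the exploration policy are assumptions, not the project's design): time each functionally equivalent operator variant on a few sample batches, then commit to the fastest for the remaining work.

```python
import time

def micro_adaptive_select(variants, batches, explore=3):
    """Time each variant on a few sample batches, then run the fastest on the
    remaining work. A toy version of micro-adaptivity; a real engine would
    re-explore periodically to react to data and load changes."""
    timings = dict.fromkeys(variants, 0.0)
    it = iter(batches)
    for name, fn in variants.items():        # exploration phase
        for _ in range(explore):
            batch = next(it, None)
            if batch is None:
                break
            start = time.perf_counter()
            fn(batch)
            timings[name] += time.perf_counter() - start
    winner = min(timings, key=timings.get)
    for batch in it:                         # exploitation phase
        variants[winner](batch)
    return winner

# Two functionally equivalent selection variants with different implementations.
variants = {
    "comprehension": lambda b: [x for x in b if x % 2 == 0],
    "filter":        lambda b: list(filter(lambda x: x % 2 == 0, b)),
}
batches = [list(range(10_000))] * 20
print(micro_adaptive_select(variants, batches))  # which wins depends on the machine
```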

View project in the research portal

Completed projects

Compositional Feature-Model Analyses
Duration: 01.01.2021 to 01.01.2026

Feature modeling is widely used to systematically model features of variant-rich software systems and their dependencies. By translating feature models into propositional formulas and analyzing them with solvers, a wide range of automated analyses across all phases of the software development process become possible. Most solvers only accept formulas in conjunctive normal form (CNF), so an additional transformation of feature models is often necessary.
In this project, we investigate whether this transformation has a noticeable impact on analyses and how to influence this impact positively. We raise awareness of CNF transformations for feature-model analysis and mitigate them as a threat to validity for research evaluations, to ensure reproducibility and fair comparisons. Furthermore, we investigate other steps in the feature-model analysis process, their alternatives, and their interactions; for instance, we study the potential and impact of knowledge compilation, interfaces, slicing, and evolution on feature-model analyses.
Our vision for this project is to lay the foundation for a compositional feature-model analysis algebra; that is, to understand how complex analyses are composed of simple parts, how those parts can be reassembled, and how they interact with each other.
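As a toy illustration of the translation described above (not this project's tooling), a small feature model can be encoded as CNF clauses and its valid configurations enumerated by brute force; a SAT or #SAT solver takes over this role at realistic scale:

```python
from itertools import product

# Toy model: Root with mandatory child B and optional child A,
# plus the cross-tree constraint A => B. (Model invented for illustration.)
features = ["Root", "A", "B"]

# CNF: each clause is a list of (feature, required-polarity) literals.
cnf = [
    [("Root", True)],                    # the root is always selected
    [("B", False), ("Root", True)],      # B => Root   (child implies parent)
    [("Root", False), ("B", True)],      # Root => B   (B is mandatory)
    [("A", False), ("Root", True)],      # A => Root
    [("A", False), ("B", True)],         # A => B      (cross-tree constraint)
]

def satisfies(assignment, cnf):
    return all(any(assignment[f] == pol for f, pol in clause) for clause in cnf)

# Brute-force model counting over all feature combinations.
configs = [dict(zip(features, bits))
           for bits in product([False, True], repeat=len(features))
           if satisfies(dict(zip(features, bits)), cnf)]
print(len(configs))   # 2 valid configurations: {Root, B} and {Root, A, B}
```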

View project in the research portal

Optimizing graph databases focussing on data processing and integration of machine learning for large clinical and biological datasets
Duration: 01.12.2021 to 30.04.2025

Graph databases are an efficient technique for storing and accessing highly linked data using a graph structure, such as links between measurement data on environmental parameters or clinical patient data. The flexible node structure makes it easy to add the results of different examinations, ranging from simple blood pressure measurements to the latest CT and MRI scans to high-resolution omics analyses (e.g., of tumor biopsies or gut microbiome samples). However, the full potential of data processing and analysis using graph databases in biological and clinical applications is not yet fully exploited. In particular, the huge amount of interconnected data that needs to be loaded, processed and analyzed leads to processing times that are too long for integration into clinical workflows. To overcome this, novel optimizations of graph operators as well as a suitable integration of analysis approaches are necessary.
This project aims to solve the above problems in two directions: (i) proposing suitable optimizations for graph database operations, also using modern hardware, and (ii) integrating machine learning algorithms for easier and faster analysis of biological data. For the first direction, we investigate the state of the art of graph database systems and their storage and processing models. We then propose optimizations for efficient operational and analytical operators. For the second direction, we envision bringing machine learning algorithms closer to their data providers, the graph databases. For this purpose, in a first step, we feed the machine learning algorithms directly with the graph as input by designing suitable graph operators. In a second step, we integrate the machine learning directly into the graph database by adding special nodes that represent the model of the machine learning algorithm.
The results of our project are improved operators that utilize both modern hardware and integration concepts for machine learning algorithms. Our generally developed approaches will advance the processing and analysis of huge graphs in a plethora of use cases beyond our targeted use case of biological and clinical data analysis.
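The second direction can be illustrated with a toy graph operator (graph and feature choice are invented for illustration) that hands an ML algorithm features computed directly over the stored adjacency structure, instead of exporting the data first:

```python
# A sketch of a graph operator that turns a stored graph directly into ML
# features (node degree and mean neighbour degree) without leaving the database.

graph = {                       # adjacency lists, e.g. patient/sample links
    "p1": ["p2", "p3"],
    "p2": ["p1"],
    "p3": ["p1", "p4"],
    "p4": ["p3"],
}

def degree_features(graph):
    """Per-node feature vector: (degree, mean degree of neighbours)."""
    deg = {v: len(nbrs) for v, nbrs in graph.items()}
    return {
        v: (deg[v], sum(deg[n] for n in nbrs) / len(nbrs) if nbrs else 0.0)
        for v, nbrs in graph.items()
    }

feats = degree_features(graph)
print(feats["p1"])   # prints: (2, 1.5) -- degree 2; neighbours p2, p3 have mean degree 1.5
```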

View project in the research portal

Unveiling the Hidden Gems: Exploring Unexpected Rare Pattern Mining in Data
Duration: 20.08.2018 to 31.03.2025

Pattern mining is the task of finding statistically relevant patterns in data that can provide valuable insights and knowledge. However, most existing pattern mining methods use a single threshold to determine the frequency of the patterns, which may not reflect the diversity and specificity of the data items. This may lead to two problems: (1) if the threshold is too low, it may generate too many patterns, many of which are redundant or uninteresting; (2) if the threshold is too high, it may miss some patterns, especially the rare ones that occur infrequently but have high significance or utility.

The rare pattern problem is a challenging and important issue in pattern mining, as rare patterns may represent unknown or hidden knowledge that can inform and inspire various domains and applications, such as medical diagnosis, fraud detection, or anomaly detection. Several studies have attempted to address this problem by mining frequent patterns, including rare ones, using different minimum item support thresholds (MIS) for each item. This approach can generate a complete set of frequent patterns without losing any significant ones. However, this approach is also very costly and inefficient, as it may still produce many redundant or useless patterns that consume a lot of time and memory.

The primary objective of this project is to develop an efficient and effective method for mining rare patterns without generating the complete set of frequent patterns. The method is based on frequent closed itemset mining, a technique that reduces the number of patterns by eliminating those that are included in other patterns with the same frequency. The method also aims to avoid generating a large number of rules and, instead, to discover only those rules that are rare and yield more actionable insights. The method can thus mine only the most interesting patterns: those that are rare, closed, and have high utility or significance. It can be applied to various data sets and domains, such as health data, where rare patterns may represent rare diseases, hidden connections, or complex interactions. The project aims to evaluate the performance and quality of the method, to compare it with other existing methods for rare pattern mining, and to demonstrate its usefulness and impact by showing how it can discover novel and intriguing patterns that can drive meaningful change.
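Two of the ingredients discussed above, per-item minimum support (MIS) thresholds and closedness, can be shown with a brute-force toy miner (transactions and thresholds are invented; real miners avoid enumerating all itemsets):

```python
from itertools import combinations

# Toy transactions; 'd' is rare but potentially significant (e.g., a rare disease).
transactions = [{"a", "b"}, {"a", "b"}, {"a", "b", "c"}, {"a", "c"}, {"b", "d"}]
items = sorted(set().union(*transactions))

def support(itemset):
    return sum(1 for t in transactions if itemset <= t)

# Per-item minimum support (MIS): rare items get a lower threshold.
mis = {i: 2 for i in items}
mis["d"] = 1

# An itemset qualifies if its support meets the smallest MIS among its items.
candidates = [frozenset(c)
              for n in range(1, len(items) + 1)
              for c in combinations(items, n)
              if support(set(c)) >= min(mis[i] for i in c)]

# Closed itemsets: no single-item extension keeps the support unchanged.
closed = [c for c in candidates
          if not any(support(c | {i}) == support(c) for i in items if i not in c)]

print(sorted(sorted(c) for c in closed))
# prints: [['a'], ['a', 'b'], ['a', 'c'], ['b'], ['b', 'd']]
```

Note how the rare pattern {b, d} survives thanks to d's lower MIS, while {c} and {d} are pruned as non-closed because a superset with the same support subsumes them.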

View project in the research portal

Optimizing graph databases focussing on data processing and integration of machine learning for large clinical and biological datasets
Duration: 01.12.2021 to 30.11.2024

Graph databases provide an efficient technique for storing and accessing highly complex data.
The flexible node structure makes it easy to add the results of different examinations, ranging from simple blood pressure measurements to the latest CT and MRI scans to high-resolution omics analyses (e.g., of tumor biopsies or gut microbiome samples). However, the full potential of data processing and analysis using graph databases is not yet fully exploited in biological and clinical use cases; in particular, the huge amount of interconnected data that needs to be loaded, processed and analyzed leads to processing times that are too long for integration into clinical workflows. To overcome this, novel optimizations of graph operators as well as a suitable integration of analysis approaches are necessary.
This project aims to solve the above-mentioned problems in two directions: (i) proposing suitable optimizations for graph database operations, also using modern hardware, and (ii) integrating machine learning algorithms for easier and faster analysis of biological data. For the first direction, we investigate the state of the art of graph database systems and their storage and processing models. We then propose optimizations for efficient operational and analytical operators. For the second direction, we envision bringing machine learning algorithms closer to their data providers, the graph databases. For this purpose, in a first step, we feed the machine learning algorithms directly with the graph as input by designing suitable graph operators. In a second step, we integrate the machine learning directly into the graph database by adding special nodes that represent the model of the machine learning algorithm. The results of our project are improved operators that utilize both modern hardware and integration concepts for machine learning algorithms. Our generally developed approaches will advance the processing and analysis of huge graphs in a plethora of use cases beyond our targeted use case of biological and clinical data analysis.

View project in the research portal

Duration: 01.01.2021 to 31.12.2023

Our aim is to develop new processing concepts for exploiting the special characteristics of hardware accelerators in heterogeneous system architectures for classical and non-classical database systems. On the system management level, we want to research alternative query modeling concepts and mapping approaches that are better suited to capture the extended feature sets of heterogeneous hardware/software systems. On the hardware level, we will work on how processing engines for non-classical database systems can benefit from heterogeneous hardware and in which way processing engines mapped across device boundaries may provide benefits for query optimization. Our working hypothesis is that standard query mapping approaches, which consider queries at the level of individual operators, are not sufficient to exploit the extended processing features of heterogeneous system architectures. Likewise, implementing a complete operator on an individual device does not seem to be optimal for exploiting heterogeneous systems. We base these claims on our results from the first project phase, in which we developed the ADAMANT architecture allowing a plug-and-play integration of heterogeneous hardware accelerators. In the second project phase, we will extend ADAMANT with the proposed processing approaches and focus on how to utilize the extended feature sets of heterogeneous systems rather than how to set such systems up.
Heterogeneous system architectures consisting of CPUs, GPUs and FPGAs offer a wide range of optimization options compared to purely CPU-based systems. To fully exploit this optimization potential, however, it is not enough to transfer existing software concepts unchanged to non-von Neumann architectures such as FPGAs. Rather, the additional processing possibilities of these architectures require the design of new processing concepts. This must already be taken into account when planning the query processing. In the first project phase, we already developed an initial concept for this, which takes into account the device-specific features in our Plug'n'Play architecture. However, we see the need to develop it further in order to achieve even better utilization of the specific characteristics of the hardware architectures. For the second project phase, we therefore hypothesize that known methods for mapping requests at the level of individual operators are not sufficient to exploit the extended processing possibilities of heterogeneous system architectures.
Our goal is therefore to explore novel processing concepts and methods for mapping queries for heterogeneous systems that deviate from the commonly used granularity at the level of individual operators. We will develop processing units that provide greater functionality than individual operators and span multiple devices. These processing units are heterogeneous in themselves and combine the specific characteristics of individual architectures. As a result, our heterogeneous system architecture enables the provision of database operations and functions that are not available or cannot be realized efficiently in classic database systems.
For demonstration purposes, we have identified three use cases that can benefit greatly from heterogeneous system architectures: High-volume data stream processing, approximate query processing and dynamic multi-query processing. High-volume data streams require a hardware architecture that allows the data to be processed without prior buffering. FPGAs are a promising platform for this due to their data stream-based processing principle. In addition, both FPGAs and GPUs are suitable for approximate query processing, as they enable arithmetic operations with reduced accuracy and the realization of approximate, hardware-accelerated sampling techniques. Dynamic multi-query processing is very demanding from a system perspective, as variable system loads can reduce the efficiency of previously established query plans. Here, the numerous levels of parallelism in heterogeneous systems enable a better distribution of system loads.
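The approximate-query-processing use case can be illustrated independently of the hardware (the function below is an invented sketch): scan only a sampled fraction of the data and scale the aggregate, trading bounded accuracy for throughput; a hardware-accelerated sampler would draw the sample on the GPU or FPGA instead.

```python
import random

def approx_sum(values, sample_rate=0.1, seed=42):
    """Estimate sum(values) from a Bernoulli sample: scan roughly a
    sample_rate fraction of the rows, then scale the sample mean back up
    by the full row count."""
    rng = random.Random(seed)
    sample = [v for v in values if rng.random() < sample_rate]
    if not sample:
        return 0.0
    return sum(sample) / len(sample) * len(values)

data = list(range(100_000))
estimate = approx_sum(data)
exact = sum(data)                       # 4,999,950,000
print(abs(estimate - exact) / exact)    # small relative error at ~10% of the scan cost
```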

View project in the research portal

A ranking-based automated approach for supporting Literature Review research methodologies.
Duration: 01.07.2020 to 30.06.2023

Literature reviews are research methodologies that aim to gather and evaluate the available evidence on a specific research topic. A common scientific method for performing such reviews is the Systematic Literature Review (SLR); another is the Systematic Mapping Study (SMS). Conducted manually, their process can be very time- and effort-consuming. Therefore, multiple tools and approaches have been proposed to facilitate several stages of this process. In this PhD thesis, we aim to evaluate the quality of these literature review studies using combined aspects. We measure the quality of a study's included primary papers by combining social and academic influence in a recursive way. Additionally, we will apply a machine learning ranking model based on a similarity function built upon bibliometric and altmetric quality criteria and full-text relevancy. To achieve this, we begin by investigating the current state of the art in different directions, mainly the most effective and commonly used quality measures of publications, altmetrics, bibliometrics, and machine learning techniques for text. A method for assessing the quality of these literature review research methods would be useful for the scientific research community in general, as it would save valuable time and reduce the required effort.
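The ranking step can be sketched as a weighted combination of normalized scores (field names, weights, and data are placeholders; the thesis's model learns such a similarity function rather than fixing the weights by hand):

```python
def rank_papers(papers, weights=(0.5, 0.3, 0.2)):
    """Rank papers by a weighted sum of normalized bibliometric, altmetric,
    and text-relevancy scores (all fields hypothetical)."""
    w_bib, w_alt, w_txt = weights
    def score(p):
        return (w_bib * p["bibliometric"]
                + w_alt * p["altmetric"]
                + w_txt * p["relevancy"])
    return [p["id"] for p in sorted(papers, key=score, reverse=True)]

papers = [
    {"id": "P1", "bibliometric": 0.9, "altmetric": 0.1, "relevancy": 0.5},
    {"id": "P2", "bibliometric": 0.4, "altmetric": 0.9, "relevancy": 0.9},
    {"id": "P3", "bibliometric": 0.2, "altmetric": 0.2, "relevancy": 0.3},
]
print(rank_papers(papers))   # prints: ['P2', 'P1', 'P3']
```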

View project in the research portal

Digital programming in a team - Adaptive support for collaborative learning
Duration: 01.03.2020 to 28.02.2023

Collaborative programming is a core component of everyday professional life in computer science. These processes, complex on both a technical and a social level, are often treated abstractly in computer science studies and play a subordinate role in subject-specific concepts for learning to program. In group work, learners have to organize and coordinate themselves and regulate their learning processes - cognitively demanding activities. To exploit the potential of collaborative forms of learning for learning programming languages and promoting social skills, learners must receive didactic support as required, both before and during the learning process. In the DiP-iT-OVGU sub-project, supported by the project partners, we will develop and evaluate a digital subject concept for collaborative programming learning on the basis of empirical studies, containing the relevant (media-)didactic approaches. In doing so, we aim to enable transfer to other universities. On the information technology level, a process model will be developed that enables the reusability of research data and the transferability of data models (e.g., for adaptive didactic support) to other courses and teaching-learning systems. The sub-project is part of the overall project, which has the following objectives:

  • Analysis and systematization of attitudes and previous experiences of the actors,
  • Development of conceptual, media-didactic criteria for the integration of collaborative programming learning in courses,
  • Development of suitable teaching-learning scenarios and creation of a corresponding digital subject concept,
  • empirical foundation through formative and summative evaluation,
  • Investigation of the effectiveness of forms of instructional guidance based on the needs of learners,
  • Supporting the transfer of findings, in terms of content and technology.


View project in the research portal

DiP-iT: Digitales Programmieren im Team (Digital Programming in a Team)
Duration: 01.02.2020 to 31.01.2023

Collaborative programming is a core component of everyday professional life in computer science. These processes, complex on both a technical and a social level, are often treated abstractly in computer science studies and play a subordinate role in subject-specific concepts for learning to program. In group work, learners must organize and coordinate themselves and regulate their learning processes - cognitively demanding activities. To exploit the potential of collaborative forms of learning for learning programming languages and fostering social skills, learners must receive didactic support as needed, both before and during the learning process. In the DiP-iT-OVGU sub-project, supported by the project partners, we will develop and evaluate a digital subject concept for collaborative programming learning on the basis of empirical studies, containing the relevant (media-)didactic approaches. In doing so, we aim to enable transfer to other universities. On the information technology level, a process model will be developed that enables the reusability of research data and the transferability of data models (e.g., for adaptive didactic support) to other courses and teaching-learning systems. The sub-project fits into the overall project with the following objectives:

  • Analysis and systematization of attitudes and previous experiences of the actors,
  • Development of conceptual, media-didactic criteria for integrating collaborative programming learning into courses,
  • Development of suitable teaching-learning scenarios and creation of a corresponding digital subject concept,
  • Empirical foundation through formative and summative evaluation,
  • Investigation of the effectiveness of forms of instructional guidance based on the needs of learners,
  • Support for the transfer of findings, in terms of content and technology.

View project in the research portal

Query Acceleration Techniques in Co-Processor-Accelerated Main-memory Database Systems
Duration: 31.08.2019 to 31.03.2022

The project addresses the current focus of analyses of main-memory databases on modern hardware: the heterogeneity of processors and their integration into query processing. Due to the multitude of optimizations and algorithm variants and the unlimited number of use cases, creating the perfect query plan is almost impossible.
The aim of the habilitation is (1) to establish a comprehensive catalog of promising algorithm variants, (2) to achieve an optimal selection of variants in the course of the higher-level query optimization, (3) and to achieve load balancing in the co-processor-accelerated system.

  1. The variant catalog includes as further dimensions both the execution on column-oriented data and the use of special index structures, and it contains different result representations. An abstraction layer is then developed from all possible dimensions so that an algorithm can be defined independently of its optimizations. This should allow each variant to be generated and executed efficiently and with little redundant code.
  2. Due to the enormous variant space, consisting of the dimensions of the variants and the influence of the executing processors, the choice of a variant to execute is not trivial. The aim here is to compare learning-based methods with regard to their suitability for algorithm selection in order to make valid decisions. These decisions should also be extended to the creation of indexes as well as the data distribution in objective (3).
  3. The load distribution in co-processor-accelerated systems is influenced by the degree of parallelization. This degree spans several dimensions, as database operations can be divided into smaller functional units (so-called primitives). These primitives can either run on the entire database or be partitioned. All these optimization potentials (different granularity levels and partitioning sizes) must be analyzed and optimally selected to enable adequate performance under the given and future query load. The aim is to train a model that creates optimal distributions and optimized plans. It is important that the model also allows conclusions to be drawn about its decisions in order to achieve generalizability.
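The decomposition of database operations into primitives running over partitions can be sketched as follows, with CPU threads standing in for the heterogeneous devices a real scheduler would target (function names and the partitioning scheme are illustrative assumptions):

```python
from concurrent.futures import ThreadPoolExecutor

def selection_primitive(partition, predicate):
    """The smallest functional unit: a selection over one data partition."""
    return [x for x in partition if predicate(x)]

def parallel_selection(data, predicate, n_partitions=4):
    """Split the input, run the primitive per partition in parallel, and merge.
    Partition count and size are exactly the tuning knobs a learned model
    would have to choose."""
    size = -(-len(data) // n_partitions)          # ceiling division
    partitions = [data[i:i + size] for i in range(0, len(data), size)]
    with ThreadPoolExecutor(max_workers=n_partitions) as pool:
        results = pool.map(selection_primitive, partitions,
                           [predicate] * len(partitions))
        return [row for part in results for row in part]

print(parallel_selection(list(range(10)), lambda x: x % 2 == 0))  # prints: [0, 2, 4, 6, 8]
```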


View project in the research portal

EXtracting Product Lines from vAriaNTs (EXPLANT II)
Duration: 01.09.2019 to 28.02.2022

A software product line (SPL) enables the systematic management of a set of reusable software artifacts and thus the efficient generation of different variants of a software. In practice, however, developers often create software variants on an ad-hoc basis by copying software artifacts and adapting them to new requirements (clone-and-own). The lack of systematics and automation here often makes the maintenance and further development of variants time-consuming and error-prone. We therefore propose a step-by-step migration of cloned software variants into a compositional (i.e. modular) SPL.
In the first project phase, we have already achieved remarkable results in the variant-preserving transformation and the corresponding analyses at model and code level. In the second phase, we now want to build on the knowledge gained from this. These are in particular: (1) An automated migration based only on code clone detection does not produce coherent software artifacts with a certain functionality. (2) Some potential cooperation partners were reluctant to migrate their systems, fearing the introduction of new bugs. (3) Annotative SPLs seem to be less error-prone and thus more robust to changes than previously assumed.
Due to the problems with industrial partners (2), we concluded that further research is needed, especially on quality assurance of migrated SPLs, migration costs, and properties of software artifacts. We therefore want to investigate which cost factors play a role in the migration and deployment of SPLs and how strong their influence is in each case. We also plan to identify quality metrics for migrated SPLs. In the first project phase, we already proposed a partially automated migration process (1), which we now want to expand further and into which we want to integrate new analyses. In particular, we want to investigate whether useful information, especially about the developers' intentions, can be obtained from sources other than the code. Promising approaches here are the analysis of version management systems and the analysis of existing behavioral and architectural models of a system. We also intend to use further refactorings, such as "Move Method", to increase the degree of automation. To improve the structure and therefore the maintainability of the resulting modularization, we are also planning to expand our migration process to include multi-software product lines, which would make it easier to separate individual functionalities of a system. Finally, we want to investigate which granularity is best suited for migrated software artifacts and whether annotative methods (3) can provide advantages over compositional methods for migrated SPLs.

View project in the research portal

MetaProteomeAnalyzer Service (MetaProtServ)
Duration: 01.12.2016 to 31.12.2021

Targeting cellular functions, metaproteomics complements metagenomics and metatranscriptomics as tools widely applied in microbial ecology (e.g., the human gut microbiome, biogas plants). Bioinformatic tools developed for proteomics of pure cultures cannot satisfactorily deal with the metagenome sequences required for protein identification in microbial communities, nor with redundancies in search results with respect to the taxonomy and function of identified proteins. To exploit more information from current metaproteome datasets, the MetaProteomeAnalyzer (MPA) software was developed. Within MetaProtServ, the GUI-based MPA will be deployed as a web service, convincing more scientists of the benefits of metaproteomics. Usability and maintainability of the software will be increased, as the standalone MPA and the web service, finally hosted at BiGi, will have the same architecture and share the same code base. The MPA will be extended to support standard interfaces for the import and export (mzIdentML) of metaproteomic datasets. Training and support for the scientific community will be intensified for different levels of users, including experts and developers.

View project in the research portal

COOPeR: Cross-device OLTP/OLAP PRocessing
Duration: 01.09.2016 to 30.06.2021

Database management systems (DBMSs) face two challenges today. On the one hand, DBMSs must handle Online Transaction Processing (OLTP) and Online Analytical Processing (OLAP) in combination to enable real-time analysis of business processes. Such real-time analysis is necessary to improve the quality of reports and analyses, since modern analysis favors fresh data rather than historical data only. On the other hand, computer systems become increasingly heterogeneous to provide better hardware performance: the architecture changes from single-core CPUs to multi-core CPUs supported by several co-processors. These trends must be considered in DBMSs to improve quality and performance and to ensure that DBMSs satisfy future requirements (e.g., more complex queries or increased data volume). Unfortunately, current research approaches address only one of these two challenges: either the combination of OLTP and OLAP workloads in traditional CPU-based systems, or co-processor acceleration for a single workload type. A unified approach addressing both challenges at once is missing. In this project, we address both challenges to enable efficient processing of combined OLTP/OLAP workloads on hybrid CPU/co-processor systems, which is necessary to realize real-time business intelligence. The main challenge is to guarantee the ACID properties for OLTP while at the same time combining and efficiently processing OLTP/OLAP workloads in such hybrid systems.

View project in the research portal

Recommending Cloned Features for Adopting Systematic Software Reuse
Duration: 01.05.2018 to 30.04.2021

Organizations heavily rely on forking (or cloning) to implement customer-specific variants of a system. While this approach has several disadvantages, organizations shy away from extracting reusable features later on, due to the corresponding effort and risk. A particularly challenging, yet poorly supported, task is deciding which features to extract. To tackle this problem, we aim to develop an analysis system that proposes suitable features based on automated analyses of the cloned legacy systems. To this end, we are concerned with several closely related research areas: cost modeling for software product lines; empirical studies on system evolution, processes, and human factors; as well as concepts to derive reusable features from clones based on, for example, feature location and code clone detection.

View project in the research portal

Adaptive Data Management in Evolving Heterogeneous Hardware/Software Systems
Duration: 01.10.2017 to 31.12.2020

Currently, database systems face two big challenges. First, the application scenarios become more and more diverse, ranging from purely relational to graph-shaped or stream-based data analysis. Second, the hardware landscape becomes more and more heterogeneous, with standard multi-core Central Processing Units (CPUs) as well as specialized high-performance co-processors such as Graphics Processing Units (GPUs) or Field Programmable Gate Arrays (FPGAs).
Recent research shows that operators designed for co-processors can outperform their CPU counterparts. However, most of the approaches focus on single-device processing to speedup single analyses not considering overall system performance. Consequently, they miss hidden performance potentials of parallel processing across all devices available in the system. Furthermore, current research results are hard to generalize and, thus, cannot be applied to other domains and devices.
In this project, we aim to provide integration concepts for diverse operators and heterogeneous hardware devices in adaptive database systems. We work on optimization strategies that exploit not only individual device-specific features but also the inherent cross-device parallelism in multi-device systems. Thereby, we focus on operators from the relational and graph domains to derive concepts not limited to a certain application domain. To achieve the project goals, interfaces and abstraction concepts for operators and processing devices have to be defined. Furthermore, operator and device characteristics have to be made available to all system layers, such that the software layer can account for device-specific features and the hardware layer can adapt to the characteristics of the operators and data. The availability of device and operator characteristics is especially important for global query optimization to find a suitable execution strategy. Therefore, we also need to analyze the design space for query processing on heterogeneous hardware, in particular with regard to functional, data, and cross-device parallelism. To handle the enormous complexity of the query optimization design space incurred by this parallelism, we follow a distributed optimization approach in which optimization tasks are delegated to the lowest possible system layer. Lower layers also have a more precise view of device-specific features, allowing the system to exploit them more efficiently. To avoid interference between optimization decisions at different layers, a focus is also set on cross-layer optimization strategies. These will incorporate learning-based techniques for evaluating optimization decisions at runtime in order to improve future optimization decisions. Moreover, we expect that learning-based strategies are best suited to integrate device-specific features not accounted for by the initial system design, as is often the case with the dynamic partial reconfiguration capabilities of FPGAs.

View project in the research portal

Efficient and Effective Entity Resolution Under Cloud-Scale Data
Duration: 01.07.2014 to 30.04.2020

Several different descriptions may exist for one real-world entity. The differences may result from typographical errors, abbreviations, data formatting, etc. Such differing descriptions lower data quality and can lead to misunderstandings, so it is necessary to resolve and clarify them. Entity Resolution (ER) is the process of identifying records that refer to the same real-world entity. It is known under several other names: if the records to be identified are all located within a single source, it is called deduplication; otherwise, it is also referred to as data matching, record linkage, duplicate detection, reference reconciliation, or object identification. In the database domain, ER is closely related to the similarity join. Today, ER plays a vital role in diverse areas, not only in the traditional applications of census, health data, or national security, but also in web applications such as business mailing lists, online shopping, and web search. It is also an indispensable step in data cleaning, data integration, and data warehousing. The use of computer techniques to perform ER dates back to the middle of the last century. Since then, researchers have developed many techniques and algorithms for ER due to its extensive applications. From its early days, there have been two general goals: efficiency and effectiveness, that is, how fast and how accurately an ER task can be solved. In recent years, the rise of the web has led to an extension of ER techniques and algorithms. Web data (also known as big data) is often semi-structured, comes from diverse domains, and exists at a very large scale. These three properties make big data qualitatively different from traditional data and bring new challenges to ER that require new techniques and algorithms as solutions.
To be specific, specialized similarity measures are required for semi-structured data; cross-domain techniques are needed to handle data from diverse domains; and parallel techniques are needed to make algorithms not only efficient and effective but also scalable, so that they can deal with the large scale of the data. This project focuses on the last point: parallelizing the process of entity resolution. The specific research direction is to explore several big data processing frameworks to understand their advantages and disadvantages for performing ER.
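The core ER loop described above, pruning the comparison space and then comparing candidate pairs with a similarity measure, can be sketched in a few lines. The following is a minimal illustration with made-up records, a deliberately naive first-letter blocking key, and bigram Jaccard similarity; production systems use far more robust keys and measures.

```python
from itertools import combinations

def ngrams(s, n=2):
    """Character n-grams of a lowercased string."""
    s = s.lower()
    return {s[i:i + n] for i in range(len(s) - n + 1)}

def jaccard(a, b):
    """Jaccard similarity of two sets."""
    return len(a & b) / len(a | b) if a or b else 0.0

def resolve(records, threshold=0.6):
    """Naive ER: block records by their first character, then compare
    all pairs within a block using bigram Jaccard similarity."""
    blocks = {}
    for r in records:
        blocks.setdefault(r[:1].lower(), []).append(r)
    matches = []
    for block in blocks.values():
        for a, b in combinations(block, 2):
            if jaccard(ngrams(a), ngrams(b)) >= threshold:
                matches.append((a, b))
    return matches
```

Blocking reduces the quadratic pair space, which is exactly what parallel ER frameworks then distribute across workers.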

View project in the research portal

Software Product Line Feature Extraction from Natural Language Documents using Machine Learning Techniques
Duration: 11.05.2016 to 29.02.2020

Constructing feature models from the requirements or textual descriptions of products is often tedious and ineffective. In this project, we automatically analyze natural language documents about products and cluster closely related requirements into features during domain analysis, based on machine learning techniques. This method can assist developers by suggesting possible features and, to a certain extent, improve the efficiency and accuracy of feature modeling.

This research focuses on feature extraction from requirements or textual descriptions of products during domain analysis. First, descriptors are extracted from the requirements or textual descriptions. Then, the descriptors are transformed into vectors, forming a word vector space. Based on a clustering algorithm, sets of descriptors are clustered into features, and their relationships are inferred. Finally, we design a simulation experiment for feature extraction from natural language documents of products to show that the approach can handle feature extraction using machine learning techniques.
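The vectorize-then-cluster pipeline above can be sketched with plain term-frequency vectors and cosine similarity. This is a simplified stand-in (greedy single-pass clustering, hypothetical requirement texts) for the project's actual descriptor extraction and clustering algorithm.

```python
import math
from collections import Counter

def vectorize(text):
    """Bag-of-words term-frequency vector."""
    return Counter(text.lower().split())

def cosine(u, v):
    """Cosine similarity between two sparse term-frequency vectors."""
    dot = sum(u[w] * v[w] for w in u if w in v)
    nu = math.sqrt(sum(c * c for c in u.values()))
    nv = math.sqrt(sum(c * c for c in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

def cluster_requirements(requirements, threshold=0.3):
    """Greedy clustering: a requirement joins the first cluster whose
    representative (first member) is similar enough, else it starts a
    new cluster. Each resulting cluster is a candidate feature."""
    clusters = []
    for r in requirements:
        v = vectorize(r)
        for c in clusters:
            if cosine(v, vectorize(c[0])) >= threshold:
                c.append(r)
                break
        else:
            clusters.append([r])
    return clusters
```

A real pipeline would use weighted descriptors (e.g., TF-IDF) and a proper clustering algorithm, but the data flow is the same.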

View project in the research portal

Legal Horizon Scanning
Duration: 04.04.2017 to 30.11.2019

Every company needs to comply with national and international laws and regulations. Unfortunately, staying compliant is a challenging task given the volume and velocity of laws and regulations. Furthermore, laws are often incomplete or inconclusive, so court judgments also need to be considered for compliance. Hence, companies in different sectors, e.g., energy, transport, or finance, spend millions of dollars every year to ensure compliance. In this project, we want to automate the process of identifying and analyzing the impact of (changing) laws, regulations, and court judgments using a combination of information retrieval, data mining, and scalable data management techniques. Based on the automated identification and impact analysis, not only can the costs of compliance be reduced, but the quality can also be increased.

View project in the research portal

GPU-accelerated Join-Order Optimization
Duration: 01.10.2016 to 09.11.2019

Different join orders can lead to execution times that vary by several orders of magnitude, which makes join-order optimization one of the most critical optimizations within DBMSs. At the same time, join-order optimization is an NP-hard problem, which makes computing an optimal join order highly compute-intensive. Because current hardware architectures use highly specialized and parallel processors, the sequential algorithms for join-order optimization proposed in the past cannot fully utilize the computational power of current hardware. Although existing approaches for join-order optimization, such as dynamic programming, benefit from parallel execution, there are no approaches for join-order optimization on highly parallel co-processors such as GPUs.
In this project, we are building a GPU-accelerated join-order optimizer by adapting existing join-order optimization approaches. Here, we are interested in the effects of GPUs on join-order optimization itself as well as on query processing. For GPU-accelerated DBMSs that use GPUs for query processing, such as CoGaDB, we need to identify efficient scheduling strategies for query processing and query optimization tasks so that GPU-accelerated optimization does not slow down query processing on GPUs.
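For context, the dynamic programming mentioned above enumerates subsets of relations and keeps the cheapest plan per subset; its inner loops over subset partitions are what GPU variants parallelize. The following sequential sketch uses a hypothetical cost model (cost as the sum of intermediate result sizes, with pairwise join selectivities) rather than any particular optimizer's model.

```python
from itertools import combinations

def optimize_join_order(cards, sel):
    """DP over relation subsets. cards maps relation names to
    cardinalities; sel maps frozenset pairs to join selectivities
    (default 1.0). Returns (cost, cardinality, plan) for the full
    relation set, where cost sums intermediate result sizes."""
    rels = list(cards)
    best = {frozenset([r]): (0.0, float(cards[r]), r) for r in rels}
    for size in range(2, len(rels) + 1):
        for subset in combinations(rels, size):
            S = frozenset(subset)
            for k in range(1, size):
                for left in combinations(subset, k):
                    L, R = frozenset(left), S - frozenset(left)
                    cl, nl, pl = best[L]
                    cr, nr, pr = best[R]
                    # output size: product of inputs times selectivities
                    card = nl * nr
                    for a in L:
                        for b in R:
                            card *= sel.get(frozenset({a, b}), 1.0)
                    cost = cl + cr + card
                    if S not in best or cost < best[S][0]:
                        best[S] = (cost, card, (pl, pr))
    return best[frozenset(rels)]
```

The exponential number of subsets is exactly why the project offloads this search to highly parallel co-processors.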

View project in the research portal

(Semi)-Automatic Approach to Support Literature Analysis for Software Engineers
Duration: 01.11.2017 to 31.10.2019

Researchers perform literature reviews to synthesize existing evidence regarding a research topic. While being an important means to condense knowledge, conducting a literature analysis, particularly a systematic literature review, requires a large amount of time and effort. Consequently, researchers are considering semi-automatic approaches to facilitate different stages of the review process. Surveys have shown that two of the most time-consuming tasks within the literature review process are selecting primary studies and assessing their quality. To assure the quality and reliability of the findings of a literature study, the quality of the included primary studies must be evaluated. Despite being critical stages, these still lack the support of semi-automatic tools and, hence, are mostly performed manually. In this PhD thesis, we aim to address this gap in the current state of research and develop techniques that support the selection and assessment of primary studies for literature analyses. For the assessment of studies, we begin with exploring the information available from the digital libraries most commonly used by software engineering researchers, such as the ACM Digital Library, IEEE Xplore, ScienceDirect, SpringerLink, and Web of Science. The information regarding authors, citation counts, and publication venues is particularly important, as it can provide initial insight into the studies. Hence, a tool that captures such bibliographic information from the digital libraries and scores the studies based on defined quality metrics would certainly help accelerate the process. However, for accurate assessment, the approach could be further extended to an in-depth full-text investigation. We believe that developing such a strategy would be useful for researchers conducting literature analyses, particularly in software engineering, but also in other research domains.

View project in the research portal

Graph-Based Analysis of Highly-Configurable Systems
Duration: 01.11.2015 to 01.11.2018

Today's software systems are getting more complex every day and contain an increasing number of configuration options to customize their behavior. Developers of these highly configurable systems face the challenge of finding faults within the variable source code and maintaining it without introducing new ones.

In order to understand the variable source code of even medium-sized systems, developers have to rely on multiple analysis techniques. However, current analysis techniques often do not scale well with the number of configuration options, or they rely on heuristics that make results less reliable.

We propose an alternative approach for analyzing highly-configurable systems based on graph theory.

Both variability models, which describe a system's configuration options and their interdependencies, and variable source code can be represented by graph-like data structures.

Therefore, we want to introduce novel analysis techniques based on well-known graph algorithms and evaluate them regarding their result quality and runtime performance.
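As a toy illustration of such graph-based analyses, once a variability model is represented as a dependency graph, classic traversals answer model questions directly. The sketch below uses hypothetical feature names and only "requires" edges; it computes, via breadth-first search, every feature that must be selected whenever a given feature is selected (starting from the root, these are candidates for core features).

```python
from collections import deque

def transitively_required(feature, requires):
    """requires maps a feature to the set of features it directly
    requires (mandatory children and cross-tree 'requires' edges).
    Returns all features forced by selecting `feature`."""
    seen = {feature}
    queue = deque([feature])
    while queue:
        f = queue.popleft()
        for g in requires.get(f, ()):
            if g not in seen:
                seen.add(g)
                queue.append(g)
    return seen
```

Other analyses, such as detecting dependency cycles or strongly connected option groups, map onto equally well-studied graph algorithms.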

View project in the research portal

Secure Data Outsourcing to Untrusted Clouds
Duration: 01.10.2014 to 30.09.2018

Cloud storage solutions are offered by many big vendors such as Google, Amazon, and IBM. The need for cloud storage has been driven by the generation of big data in almost every corporation. The biggest hurdle to outsourcing data to cloud vendors is the data owners' security concerns, which have become the stumbling block to large-scale adoption of third-party cloud databases. The focus of this PhD project is to provide a comprehensive framework for the security of data outsourced to untrusted clouds. This framework includes encrypted storage in cloud databases, secure data access, privacy of data access, and authenticity of stored data in the cloud. The security framework will be based on Hadoop-based open source projects.

View project in the research portal

On the Impact of Hardware on Relational Query Processing
Duration: 01.09.2013 to 31.08.2018

Satisfying the performance needs of tomorrow typically implies using modern processor capabilities (such as single instruction, multiple data) and co-processors (such as graphics processing units) to accelerate database operations. Algorithms are typically hand-tuned to the underlying (co-)processors. This solution is error-prone, introduces high implementation and maintenance costs, and is not portable to other (co-)processors. Therefore, we argue for a combination of database research with modern software engineering approaches, such as feature-oriented software development (FOSD). The goal of this project is to generate optimized database algorithms tailored to the underlying (co-)processors from a common code base. With this, we maximize performance while minimizing implementation and maintenance effort for databases on new hardware. Project milestones:

  • Creating a feature model: Arising from heterogeneous processor capabilities, promising capabilities have to be identified and structured to develop a comprehensive feature model. This includes fine-grained features that exploit the processor capabilities of each device.
  • Annotative vs. compositional FOSD approaches: Both approaches have known benefits and drawbacks. To have a suitable mechanism to construct hardware-tailored database algorithms using FOSD, we have to evaluate which of these two approaches is the best for our scenario.
  • Mapping features to code: Arising from the feature model, possible code snippets to implement a feature have to be identified.
  • Performance evaluation: To validate our solution and derive rules for processor allocation and algorithm selection, we have to perform an evaluation of our algorithms.
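To make the annotative vs. compositional distinction from the milestones concrete, the following sketch contrasts the two styles in miniature. All names (the `simd` flag, the scan operator, the counting feature) are hypothetical stand-ins, and Python mixins merely approximate feature modules; real FOSD tooling works on C/C++ code.

```python
# Annotative style: one code base, variation points guarded by feature
# flags (analogous to #ifdef annotations in C).
FEATURES = {"simd": False}  # hypothetical configuration

def scan_annotative(rows):
    out = list(rows)
    if FEATURES["simd"]:
        pass  # a SIMD-specific code path would replace this branch
    return out

# Compositional style: each feature lives in its own module (mixin),
# and a variant is derived by composing only the selected modules.
class BaseScan:
    def scan(self, rows):
        return list(rows)

class CountingFeature:
    """Hypothetical feature refining scan() with a local extension."""
    def scan(self, rows):
        result = super().scan(rows)
        self.scanned = len(result)
        return result

def derive_variant(*features):
    """Compose selected feature mixins onto the base implementation."""
    return type("Variant", (*features, BaseScan), {})
```

The annotative variant keeps all code in one place at the price of scattered conditionals; the compositional variant isolates each feature but requires a composition step, which is exactly the trade-off the second milestone evaluates.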

View project in the research portal

Model-Based Refinement of Product Lines
Duration: 01.04.2015 to 31.03.2018

Software product lines are families of related software systems that are developed by taking variability into account during the complete development process. In model-based refinement methods (e.g., ASM, Event-B, Z, VDM), systems are developed by stepwise refinement of an abstract, formal model.

In this project, we develop concepts to combine model-based refinement methods and software product lines. On the one hand, this combination aims to improve the cost-effectiveness of applying formal methods by taking advantage of the high degree of reuse provided by software product lines. On the other hand, it helps to handle the complexity of product lines by providing means to detect defects on a high level of abstraction, early in the development process.

View project in the research portal

EXtracting Product Lines from vAriaNTs (EXPLANT)
Duration: 16.02.2016 to 15.02.2018

Software product lines enable the strategic reuse of software and handle variability in a systematic way. In practice, however, reuse and variability are often implemented ad hoc by copying and adapting artifacts (the clone-and-own approach). Due to a lack of automation, propagating changes (e.g., error corrections, performance improvements) to several cloned product variants and exchanging functionality between variants is time-consuming and error-prone.

To solve these problems, we propose the stepwise migration of cloned product variants to a compositional software product line (SPL). First, all of the variants are integrated unaltered into an initial SPL. Subsequently, this SPL is transformed into a well-structured, modular target SPL by means of small, semantics-preserving steps. Compared to existing approaches to migrate product variants to an SPL, this course of action provides the following advantages:

1) The SPL can be used in production immediately. Up until now, production had to be halted for extended periods of time because migration could not be interrupted.

2) The composition-based implementation approach supports maintainability. This avoids the problems associated with annotation-based SPL implementation techniques (e.g., lack of modularization, hard-to-read program code), which are widely used in practice.

3) Semantics-preservation of the original variants is guaranteed.

The core of our project is research on variant-preserving refactorings. By this, we mean consistent transformations on the model as well as the implementation level, which are semantics-preserving with respect to all possible products of the SPL. These refactorings are combined with code clone detection in order to increase reuse and thereby decrease maintenance costs and future defect rates. Moreover, we will research feature location techniques for multiple product variants. Combined with variant-preserving refactorings, these techniques allow for the stepwise extraction of functionality from multiple product variants. Not only can we reconstruct the original variants by composing the extracted features, but we can even create new variants. Thereby, new requirements are addressed even more effectively.

View project in the research portal

A Personalized Recommender System for Product-Line Configuration
Duration: 15.01.2015 to 31.12.2017

Today's competitive marketplace requires industries to understand the unique and particular needs of their customers. Software product lines enable industries to create individual products for every customer by providing an interdependent set of features that can be configured to form personalized products. However, as most features are interdependent, users need to understand the impact of their gradual decisions in order to make the most appropriate choices. Thus, especially when dealing with large feature models, specialized assistance is needed to guide users in configuring valid, personalized products. In this project, we aim to use recommender systems and search-based software engineering techniques to handle the product configuration process in large and complex product lines.

View project in the research portal

Software Product Line Testing
Duration: 01.10.2013 to 30.09.2017

Exhaustively testing every product of a software product line (SPL) is a difficult task due to the combinatorial explosion of the number of products. Combinatorial interaction testing is a technique to reduce the number of products under test. In this project, we aim to handle multiple and possibly conflicting objectives during the test process of SPL.
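Combinatorial interaction testing, as mentioned above, typically targets pairwise (t=2) coverage: every pair of values of any two options must appear in at least one sampled product. A common way to build such a sample is a greedy algorithm that repeatedly picks the configuration covering the most still-uncovered pairs. The sketch below is a simplified illustration with hypothetical options; it enumerates all candidate configurations per step, which only works for small models (real tools such as covering-array generators sample candidates instead), and it ignores feature-model constraints.

```python
from itertools import combinations, product

def pairwise_configs(options):
    """options: {option: [values]}. Greedily builds configurations
    until every pair of values of any two options is covered once."""
    names = list(options)
    uncovered = {((a, va), (b, vb))
                 for a, b in combinations(names, 2)
                 for va, vb in product(options[a], options[b])}
    configs = []
    while uncovered:
        best_cfg, best_gain = None, -1
        for values in product(*(options[n] for n in names)):
            cfg = dict(zip(names, values))
            gain = sum(1 for (a, va), (b, vb) in uncovered
                       if cfg[a] == va and cfg[b] == vb)
            if gain > best_gain:
                best_cfg, best_gain = cfg, gain
        configs.append(best_cfg)
        uncovered = {((a, va), (b, vb)) for (a, va), (b, vb) in uncovered
                     if best_cfg[a] != va or best_cfg[b] != vb}
    return configs
```

Handling multiple, possibly conflicting objectives (e.g., coverage vs. test cost) on top of such sampling is what this project investigates.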

View project in the research portal

Southeast Asia Research Network: Digital Engineering
Duration: 01.06.2013 to 31.05.2017

German research organizations are increasingly interested in outstanding Southeast Asian institutions as partners for collaboration in the fields of education and research. Bilateral know-how, technology transfer and staff exchange as well as the resultant opportunities for collaboration are strategically important in terms of research and economics. Therefore, the establishment of a joint research structure in the field of digital engineering is being pursued in the project "SEAR DE Thailand" under the lead management of Otto von Guericke University Magdeburg (OvGU) in cooperation with the Fraunhofer Institute for Factory Operation and Automation (IFF) and the National Science and Technology Development Agency (NSTDA) in Thailand.

View project in the research portal

Modern Data Management Technologies for Genome Analysis
Duration: 01.12.2013 to 31.12.2016

Genome analysis is an important method to improve disease detection and treatment. The introduction of next-generation sequencing techniques makes it possible to generate genome data for analysis in less time and at reasonable cost. In order to provide fast and reliable genome analysis despite ever-increasing amounts of genome data, genome data management and analysis techniques must also improve. In this project, we develop concepts and approaches for using modern database management systems (e.g., column-oriented, in-memory database management systems) for genome analysis. Project scope:

  • Identification and evaluation of genome analysis use cases suitable for database support
  • Development of data management concepts for genome analysis using modern database technology with regard to chosen use cases and data management aspects such as data integration, data integrity, data provenance, data security
  • Development of efficient data structures for querying and processing genome data in databases for defined use cases
  • Exploiting modern hardware capabilities for genome data processing

View project in the research portal

Sustainable variability management of feature-oriented software product lines (NaVaS)
Duration: 01.09.2014 to 31.08.2016

The use of product line technology, as applied successfully in the automotive industry for decades, offers enormous potential to revolutionize software development. Analogous to the reuse of engine types that can be combined with different car bodies by a car manufacturer, software product lines enable the creation of customized software products from common software components. The aim of the NaVaS project is therefore to simplify the development of software products on the basis of software product lines and thus promote the establishment of this development technology.

Core work of the project
To support the development of software product lines, the NaVaS project is developing a software development environment for the creation of product lines. It is based on an existing research demonstrator and will be adapted, both functionally and from the user's point of view, to the requirements of commercial business and research. Here, METOP GmbH's many years of experience in the development of customized software are combined with the University of Magdeburg's research on alternative technologies, and practicability is ensured with the help of suitable associated partners from industry and research. The provision of a development environment for software product lines, in line with the research demonstrator, thus opens up new possibilities: development times can be greatly reduced, products become available on the market more quickly, and additional costs can be saved due to the reduced maintenance effort.

View project in the research portal

Software Product Line Languages and Tools III
Duration: 01.07.2012 to 31.12.2015

In this project, we focus on research and development of tools and languages for software product lines. Our research focuses on the usability, flexibility, and complexity of current approaches. It includes tools such as FeatureHouse, FeatureIDE, CIDE, FeatureC++, Aspectual Mixin Layers, Refactoring Feature Modules, and the formalization of language concepts. The research centers around the ideas of feature-oriented programming and explores boundaries toward other development paradigms, including type systems, refactorings, design patterns, aspect-oriented programming, generative programming, model-driven architectures, service-oriented architectures, and more.

  • FeatureIDE: An Extensible Framework for Feature-Oriented Software Development
  • SPL2go: A Catalog of Publicly Available Software Product Lines

View project in the research portal

Query Optimization, Query Processing, GPU-accelerated Data Management, Self-Tuning
Duration: 01.04.2014 to 31.03.2015

Performance demands on database systems are ever increasing, and much research focuses on new approaches to fulfill the performance requirements of tomorrow. GPU acceleration is an emerging and promising opportunity to speed up query processing in database systems by using low-cost graphics processors as co-processors. One major challenge is how to combine traditional database query processing with GPU co-processing techniques and efficient database operation scheduling in a GPU-aware query optimizer. In this project, we develop a hybrid query processing engine, which extends the traditional physical optimization process to generate hybrid query plans and to perform cost-based optimization in a way that combines the advantages of CPUs and GPUs. Furthermore, we aim at a solution that is independent of database architecture and data model to maximize applicability.

  • HyPE-Library
    • HyPE is a hybrid query processing engine built for the automatic selection of processing units for co-processing in database systems. The long-term goal of the project is to implement a fully fledged query processing engine that can automatically generate and optimize a hybrid CPU/GPU physical query plan from a logical query plan. It is a research prototype developed by the Otto von Guericke University Magdeburg in collaboration with Ilmenau University of Technology.
  • CoGaDB
    • CoGaDB is a prototype of a column-oriented GPU-accelerated database management system developed at the University of Magdeburg. Its purpose is to investigate advanced coprocessing techniques for effective GPU utilization during database query processing. It uses our hybrid query processing engine (HyPE) for the physical optimization process. 
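The essence of the processing-unit selection described above is a cost model per operator and device: for each operator in a plan, the engine estimates the runtime on each candidate device and assigns the cheapest one. The sketch below is a deliberately simplified stand-in, not HyPE's actual interface; the linear cost model and all names are hypothetical, and engines like HyPE learn such models from observed execution times rather than hard-coding them.

```python
def estimated_cost(models, op, device, tuples):
    """models: {(op, device): (startup, per_tuple)}, a hypothetical
    linear cost model for an operator on a device."""
    startup, per_tuple = models[(op, device)]
    return startup + per_tuple * tuples

def schedule(plan, models, devices=("cpu", "gpu")):
    """Assign each (operator, input size) in a plan to the device
    with the lowest estimated cost."""
    return [(op, min(devices,
                     key=lambda d: estimated_cost(models, op, d, tuples)))
            for op, tuples in plan]
```

Note how a fixed GPU startup cost (e.g., data transfer) makes the CPU win for small inputs while the GPU wins for large ones, which is the basic trade-off a hybrid optimizer must balance.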

View project in the research portal

Clustering the Cloud - A Model for Self-Tuning of Cloud Data Management Systems
Duration: 01.10.2011 to 31.03.2015

Over the past decade, cloud data management systems have become increasingly popular because they provide on-demand elastic storage and large-scale data analytics in the cloud. These systems were built with the main intention of supporting scalability and availability in an easily maintainable way. However, the (self-)tuning of cloud data management systems to meet specific requirements beyond these basic properties, and for possibly heterogeneous applications, becomes increasingly complex. Consequently, the self-management ideal of cloud computing is still to be achieved for cloud data management. The focus of this PhD project is (self-)tuning for cloud data management clusters serving one or more applications with divergent workload types. It aims to achieve dynamic clustering to support workload-based optimization. Our approach is based on logical clustering within a database cluster according to different criteria, such as data, optimization goal, thresholds, and workload types.

View project in the research portal

Analysis Strategies for Software Product Lines
Duration: 01.02.2010 to 31.12.2014

Software product line engineering has gained considerable momentum in recent years, both in industry and in academia. A software product line is a set of software products that share a common set of features. Software product lines challenge traditional analysis techniques, such as type checking, testing, and formal verification, in their quest to ensure correctness and reliability of software. Simply creating and analyzing all products of a product line is usually not feasible, due to the potentially exponential number of valid feature combinations. Recently, researchers began to develop analysis techniques that take the distinguishing properties of software product lines into account, for example, by checking feature-related code in isolation or by exploiting variability information during analysis. The emerging field of product-line analysis techniques is both broad and diverse, such that it is difficult for researchers and practitioners to understand their similarities and differences (e.g., with regard to variability awareness or scalability), which hinders systematic research and application. We classify the corpus of existing and ongoing work in this field, we compare techniques based on our classification, and we infer a research agenda. A short-term benefit of our endeavor is that our classification can guide research in product-line analysis and, to this end, make it more systematic and efficient. A long-term goal is to empower developers to choose the right analysis technique for their needs out of a pool of techniques with different strengths and weaknesses.

  • Stepwise Migration of Cloned Product Variants to a Compositional Software Product Line: This part of the project aims at consolidating cloned product families into a well-structured, modular software product line. The consolidation process is semi-automatic and stepwise, where each step is a small, semantics-preserving transformation of the code, the feature model, or both. These semantics-preserving transformations are called variant-preserving refactorings.
  • View project in the research portal

    Consistent data management for cloud gaming
    Duration: 01.07.2012 to 31.12.2014

    Cloud storage systems are able to meet the future requirements of the Internet by using non-relational database management systems (NoSQL DBMS). NoSQL systems simplify the relational database schema and the data model to improve system performance, in particular scalability and parallel processing. However, these properties of cloud storage systems limit the implementation of some Web applications, such as massively multi-player online games (MMOG). In the research described here, we want to extend existing cloud storage systems to meet the requirements of MMOG. We propose to build a transaction layer on top of the cloud storage layer that offers flexible ACID levels. As a goal, transaction processing should be offered to game developers as a service. With such an ACID-level model, both the availability of the existing system and the data consistency during multi-player interaction can be traded off according to specific requirements.
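
The per-category ACID-level idea can be sketched as follows (a deliberately minimal illustration, not the project's actual API; the real layer would run a commit protocol over the cloud store):

```python
# Sketch of a thin transaction layer over a key-value cloud store that lets
# game developers pick a consistency ("ACID") level per data category.
STRONG, EVENTUAL = "strong", "eventual"

class TxLayer:
    def __init__(self, levels):
        self.store = {}          # stands in for the cloud storage layer
        self.levels = levels     # data category -> consistency level

    def write(self, category, key, value):
        if self.levels.get(category, EVENTUAL) == STRONG:
            # strong path: atomic, all-or-nothing update (simplified here to a
            # synchronous write; a real layer would coordinate replicas)
            self.store[(category, key)] = value
            return "committed"
        # eventual path: applied without coordination, readers may see stale data
        self.store[(category, key)] = value
        return "queued"

layer = TxLayer({"inventory": STRONG, "position": EVENTUAL})
print(layer.write("inventory", "player1", ["sword"]))  # committed
print(layer.write("position", "player1", (10, 20)))    # queued
```

Player inventory (loss-sensitive) gets the strong path, while high-frequency position updates tolerate eventual consistency, which preserves availability.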

    View project in the research portal

    Load-balanced Index Structures for Self-tuning DBMS
    Duration: 01.01.2010 to 31.12.2014

    Index tuning, as part of database tuning, is the task of selecting and creating indexes with the goal of reducing query processing times. However, in dynamic environments with various ad-hoc queries, it is difficult to identify potentially useful indexes in advance. The approach for self-tuning index configurations developed in previous research provides a solution for continuous tuning on the level of index configurations, where a configuration is a set of common index structures. In this project we investigate a novel approach that moves the solution of the problem at hand to the level of the index structures themselves, i.e., we create index structures that are inherently self-optimizing.
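
The notion of an inherently self-optimizing structure can be sketched like this (an invented toy, not the project's algorithm): the index observes its own workload and materializes fine-grained entries only for frequently accessed key ranges.

```python
# Sketch of a workload-adaptive index: accesses are counted per key range
# (bucket), and only "hot" buckets get a materialized index; cold ranges
# fall back to a scan, keeping the index small.
from collections import Counter

class AdaptiveIndex:
    def __init__(self, bucket_size=100, hot_threshold=3):
        self.hits = Counter()
        self.indexed = set()          # buckets with a materialized index
        self.bucket_size = bucket_size
        self.hot_threshold = hot_threshold

    def access(self, key):
        bucket = key // self.bucket_size
        self.hits[bucket] += 1
        if self.hits[bucket] >= self.hot_threshold:
            self.indexed.add(bucket)  # spend memory only where the load is
        return bucket in self.indexed  # True: index lookup, False: scan

idx = AdaptiveIndex()
for _ in range(3):
    idx.access(42)        # bucket 0 becomes hot
print(idx.access(42))     # True  (served by the index)
print(idx.access(9999))   # False (cold range, falls back to scan)
```

Unlike tuning on the configuration level, no external advisor decides which indexes to create; the structure adapts continuously on its own.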

    View project in the research portal

    Minimal-invasive integration of the provenance concern into data-intensive systems
    Duration: 01.11.2013 to 31.12.2014

    In the recent past, a new research topic named provenance has gained much attention. The purpose of provenance is to determine the origin and derivation history of data. Thus, provenance is used, for instance, to validate and explain computation results. Due to the digitalization of previously analogue processes that consume data from heterogeneous sources, and the increasing complexity of the respective systems, it is a challenging task to validate computation results. To face this challenge, there has been plenty of research resulting in solutions that allow for the capturing of provenance data. These solutions cover a broad variety of approaches, ranging from formal approaches defining how to capture provenance for relational databases, over high-level data models for linked data on the web, to all-in-one solutions supporting the management of scientific workflows. However, all these approaches have in common that they are tailored to their specific use case. Consequently, provenance is considered an integral part of these approaches and can hardly be adjusted to new user requirements or be integrated into existing systems. We envision that provenance, which highly needs to be adjusted to the needs of specific use cases, should be a cross-cutting concern that can be integrated seamlessly without interfering with the original system.
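
What "provenance as a cross-cutting concern" can mean in code is sketched below (our illustration, not the project's mechanism): a decorator records origin and derivation history without modifying the wrapped computation itself.

```python
# Sketch: minimally invasive provenance capture via a decorator. The
# original function body stays untouched; the derivation log lives outside.
import functools

PROVENANCE = []  # derivation history, kept separate from the original system

def track_provenance(fn):
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        result = fn(*args, **kwargs)
        PROVENANCE.append({
            "operation": fn.__name__,
            "inputs": args,
            "output": result,
        })
        return result
    return wrapper

@track_provenance
def join_sources(a, b):
    return a + b   # stands in for an arbitrary data-intensive computation

join_sources([1, 2], [3])
print(PROVENANCE[0]["operation"], PROVENANCE[0]["output"])  # join_sources [1, 2, 3]
```

Removing the decorator removes the concern; the computation is unaffected, which is exactly the separation the project envisions.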

    View project in the research portal

    MultiPLe - Multi Software Product Lines
    Duration: 01.03.2012 to 31.10.2014

    The increasing spread of software product lines is resulting in multi-software product lines (multi-product lines for short): complex software systems that are created from a large number of interdependent software product lines. The aim of the project is to develop concepts and methods for the systematic development of multi-product lines. The focus of the second project phase is the generalization of the developed concepts in order to achieve composition safety and interoperability in heterogeneous multi-product lines that are developed with different programming paradigms and variability mechanisms. For this purpose, it must be ensured for all valid configurations of a multi-product line that the configurations of the product lines involved are coordinated with each other, so that the functionality required by one product line is provided by another product line (semantic interoperability) and the syntactic correctness, e.g., of method calls, is guaranteed (syntactic interoperability). The aim is therefore to achieve composition safety at the model level, in order to abstract from implementation details, and to guarantee interoperability at the implementation level (e.g., type safety) across different variability mechanisms. Only in this way does product-line technology scale to the development of complex heterogeneous software systems.
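
The coordination requirement can be sketched as a simple check (a toy model with invented feature names; the project's actual checks operate on feature models of all valid configurations): before composing products from two lines, verify that every feature one line requires is provided by the other.

```python
# Sketch: semantic-interoperability check between two product lines.
def interoperable(config_a, provides_b, requires):
    """requires maps a feature of line A to the feature of line B that
       must provide the needed functionality."""
    missing = [need for feat, need in requires.items()
               if feat in config_a and need not in provides_b]
    return (not missing, missing)

requires = {"Persistence": "StorageEngine", "Crypto": "KeyStore"}

ok, missing = interoperable({"Persistence"}, {"StorageEngine"}, requires)
print(ok, missing)  # True []

ok, missing = interoperable({"Persistence", "Crypto"}, {"StorageEngine"}, requires)
print(ok, missing)  # False ['KeyStore']
```

A full solution must establish this for every valid configuration pair, not just one, which is why the project works at the model level.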

    View project in the research portal

    A Hybrid Query Optimization Engine for GPU accelerated Database Query Processing
    Duration: 01.04.2012 to 31.03.2014

    Performance demands for database systems are ever increasing, and much research focuses on new approaches to fulfill the performance requirements of tomorrow. GPU acceleration is a newly arising and promising opportunity to speed up query processing of database systems by using low-cost graphics processors as coprocessors. One major challenge is how to combine traditional database query processing with GPU coprocessing techniques and efficient database operation scheduling in a GPU-aware query optimizer. In this project, we develop a Hybrid Query Processing Engine, which extends the traditional physical optimization process to generate hybrid query plans and to perform cost-based optimization in such a way that the advantages of CPUs and GPUs are combined. Furthermore, we aim at a solution that is independent of the database architecture and data model to maximize applicability.

    • HyPE-Library
      • HyPE is a hybrid query processing engine built for the automatic selection of processing units for coprocessing in database systems. The long-term goal of the project is to implement a fully fledged query processing engine that is able to automatically generate and optimize a hybrid CPU/GPU physical query plan from a logical query plan. It is a research prototype developed by the Otto-von-Guericke University Magdeburg in collaboration with Ilmenau University of Technology.
    • CoGaDB
      • CoGaDB is a prototype of a column-oriented GPU-accelerated database management system developed at the University of Magdeburg. Its purpose is to investigate advanced coprocessing techniques for effective GPU utilization during database query processing. It uses our hybrid query processing engine (HyPE) for the physical optimization process.  
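
Cost-based CPU/GPU operator placement, the core of such a hybrid optimizer, can be sketched as follows (the cost numbers are invented for illustration; HyPE's real cost models are learned at run time, and real plans are trees rather than pipelines):

```python
# Sketch: greedy device placement for a linear operator pipeline, charging
# a penalty whenever intermediate results must cross the CPU/GPU boundary.
COSTS = {  # operator -> {device: estimated ms}
    "scan":      {"CPU": 10, "GPU": 4},
    "join":      {"CPU": 50, "GPU": 20},
    "aggregate": {"CPU": 5,  "GPU": 12},
}
TRANSFER_MS = 5  # cost of moving an intermediate result between devices

def place(plan):
    """Assign each operator to the cheaper device, including transfer cost
       relative to where the previous operator ran."""
    placement, total, prev = [], 0, None
    for op in plan:
        dev = min(COSTS[op], key=lambda d: COSTS[op][d]
                  + (TRANSFER_MS if prev and d != prev else 0))
        total += COSTS[op][dev] + (TRANSFER_MS if prev and dev != prev else 0)
        placement.append((op, dev))
        prev = dev
    return placement, total

print(place(["scan", "join", "aggregate"]))
```

Note how the transfer penalty can keep an operator on the "wrong" device: placement decisions interact, which is what makes hybrid optimization harder than per-operator device choice.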

    View project in the research portal

    STIMULATE - Management and Organizational Structure
    Duration: 01.03.2013 to 28.02.2014

    Concepts for improved operation preparation and execution as well as long-term quality assurance are being considered in the project. A framework concept will be developed that serves as the basis for the development of a data and process model for the research campus with the aim of efficient integration and new development of innovative infrastructures. The provenance-sensitive storage and processing of medical data provides an adapted trade-off between the requirements for the storage and processing of data in terms of traceability and reproducibility on the one hand and the requirements of data protection on the other.

    View project in the research portal

    ViERforES-II: Interoperability
    Duration: 01.01.2011 to 30.09.2013

    The functionality of new products is achieved through an increasing proportion of software in the form of embedded systems. In interaction with other function-determining components of complex technical systems, this requires new technologies for mastering the highest safety and reliability of product developments. The aim of ViERforES is to use virtual and augmented reality to make non-physical product properties visible and thus develop adequate methods and tools for engineering.

    Providing solutions for a holistic view of complex products or systems during development, testing and operation poses major challenges for information technology. Among other things, independently modeled components must be integrated into an overall context, for which virtual or augmented reality can be used as an integrated workspace. The aim of the sub-project "Interoperability for digital products with embedded systems" is therefore to ensure the interoperability of the heterogeneous systems involved and the models they manage. This ranges from the syntactic (different interfaces, data models, etc.) to the semantic (meaning and connection of differently modeled data and functionalities) to the pragmatic level (use by users, support of workflows, cooperation).

    The second phase will focus in particular on non-functional interoperability between systems and interoperability between heterogeneous simulation systems.

    View project in the research portal

    ViERforES-II: Trustworthy systems
    Duration: 01.01.2011 to 30.09.2013

    In this work package of the sub-project "Trustworthy Systems", the reliability of embedded systems at source code level is investigated with a focus on program comprehension and maintainability. The aim is to investigate and implement concepts and visualizations to improve program comprehension. The result at this level should be a prototype component for a development environment in which concepts for optimal support of program comprehension are implemented. This should, for example, enable security vulnerabilities to be identified and rectified at source code level and support the maintenance of software, thereby reducing maintenance costs. Comprehensive empirical studies should show that the implemented concepts can reduce security risks and improve software maintenance.

    View project in the research portal

    Virtual and Augmented Reality for Highest Safety and Reliability of Embedded Systems - Phase II (ViERforES II)
    Duration: 01.01.2011 to 30.09.2013

    Under the title Virtual and Augmented Reality for Maximum Safety and Reliability of Embedded Systems (ViERforES), a network of university and application-oriented research institutions began to tackle the challenges posed by the increased use of modern information and communication technologies in the application fields of automotive/mobility, medical technology/neuroscience, and energy systems.
    What these three fields of application have in common is that the products to be developed in these areas realize their functionality through a growing proportion of software. In order for products from Germany to continue to meet their high quality and reliability standards, it is necessary to develop new engineering methods. The established methods of product and process development must therefore also be extended to software engineering.
    The results achieved by ViERforES were demonstrated by setting up demonstrators in each field of application. As a result, industrial companies were recruited to join the project consortium in the subsequent ViERforES II project. Their task is to support the application-oriented further development of the demonstrators so that the functional testing of their products and processes can take place in a virtual environment in future.

    View project in the research portal

    Digi-Dak (Digital Fingerprint Traces) - Subproject "Data Preprocessing and Data Management"
    Duration: 01.01.2010 to 31.05.2013

    The Digi-Dak project is dedicated to researching pattern-recognition techniques for fingerprint traces captured with contactless optical 3D surface sensor technology. The general goal is to improve and support criminalistic forensics (dactyloscopy). In particular, the project focuses on potential scenarios in preventive and forensic processes, especially the overlapping of traces and age detection. The goal of the subproject "Data Preprocessing and Data Management" is to prepare and store the captured (three-dimensional) sensor data in such a way that the automated process of fingerprint-trace acquisition is supported and improved. In this context, methods for the efficient storage and query processing of high-dimensional data are investigated. In addition, methods and concepts are to be researched that guarantee the evidential value of the captured fingerprint traces even after their pre- and further processing.

    View project in the research portal

    Optimization and self-management concepts for data warehouse systems
    Duration: 01.01.2011 to 12.04.2013

    Data warehouse systems have been used for some time for market and financial analyses in many areas of the economy. The areas of application for these systems are constantly expanding, and the amount of data to be stored (historical data) is also growing ever faster. As these are often very complex and time-critical applications, the analyses and calculations on the data must be continually optimized. The ever-increasing performance of computer and server systems alone is not sufficient for this, as the applications constantly impose new requirements and increasingly complex calculations. This also makes it clear that the time and financial effort required to operate such systems is immense.

    The aim of this project is to investigate the possibilities of extending existing approaches and integrating new proposals into existing systems in order to increase their performance. To achieve this goal, approaches from the field of self-tuning are to be used, as these enable the systems to adapt autonomously to constantly changing conditions and requirements. These approaches are to be improved through extensions such as the support of bitmap indexes. Furthermore, deeper levels of optimization are to be addressed, making (autonomous) physical optimization possible and easier.

    View project in the research portal

    Software Product Line Languages and Tools II
    Duration: 01.01.2011 to 30.06.2012

    This project focuses on research and development of tools and languages for software product line development. The research aims at improving the usability and flexibility of current approaches and at reducing their complexity. This includes tools such as FeatureC++, FeatureIDE, and CIDE, as well as concepts like Aspect Refinement, Aspectual Mixin Layers, and the formalization of language concepts. The research centers around the ideas of feature-oriented programming and explores boundaries toward other development paradigms, including design patterns, aspect-oriented programming, generative programming, model-driven architectures, service-oriented architectures, and more.

    View project in the research portal

    MultiPLe - Multi Software Product Lines
    Duration: 01.09.2009 to 31.12.2011

    The aim of the MultiPLe project is to develop and extend techniques and concepts for modeling, implementing, and configuring multiple interdependent product lines (MultiPLs). This includes new approaches for modeling dependencies between SPLs, extending techniques for SPL implementation, automation of SPL development, and optimization of MultiPLs. In this project, we focus on feature-oriented product line engineering and associated modeling techniques like feature-oriented domain analysis (FODA), as well as different implementation techniques like component-based software development, frameworks, preprocessors (e.g., the C/C++ preprocessor), and feature-oriented programming (FOP).

    View project in the research portal

    COMO B3 - IT-Security Automotive
    Duration: 01.09.2007 to 31.08.2011

    More and more IT components are finding their way into (motor) vehicles, whether to increase comfort or safety. The corresponding autonomous control units communicate via various bus systems and thereby constitute the IT system "automobile". The increased volume of communication (also via external interfaces, e.g., car-2-car) raises both the security risk and security needs as well as the amount of data to be processed.
    Subproject B3 of the research project COmpetence in MObility (COMO) therefore aims to create concepts for the automotive system that both permanently guarantee security in the car (e.g., defense against attacks on IT components) and handle the high data volume efficiently through infrastructure software (e.g., DBMS).
    For data management, a product-line approach is pursued that, by applying new programming techniques, both meets the resource constraints in the automobile and minimizes the cost of redeveloping individual components through reuse.

    Project partners are Prof. J. Dittmann (Multimedia & Security group) and Prof. G. Saake (Databases group) from the Institute of Technical and Business Information Systems (ITI) of the OvGU, as well as Prof. U. Jumar from the Institut für Automation und Kommunikation (ifak) of the OvGU.

    View project in the research portal

    Data Interfaces and Holistic Models for Functional Simulation (C1 Automotive)
    Duration: 01.09.2007 to 31.08.2011

    Holistic virtual engineering, from the development through to the manufacturing of products, requires connecting different engineering disciplines with respect to the levels of consideration and degrees of detail in their model worlds.
    The goal of this subproject, which runs within the COmpetence in MObility (COMO) project, comprises the description, specification, and development of model and interface tools for managing the data. The collection of tools includes data transformations and a meta-database that contains information about models, components, and the system.
    This is intended to contribute to the further development of virtual technologies and to improving their applicability in engineering and planning processes.

    Project partners of the subproject are Prof. U. Gabbert from the Institute of Mechanics (IFME) Magdeburg, Prof. R. Kasper from the Institute of Mobile Systems (IMS) Magdeburg, and Prof. M. Schenk from the Institute of Logistics and Material Handling Systems (ILM) Magdeburg.

    View project in the research portal

    Reference Data Models for Mechatronic Design, Modeling, and Simulation (C3 Automotive)
    Duration: 01.09.2007 to 31.08.2011

    Holistic virtual engineering, from the development through to the manufacturing of products, requires connecting different engineering disciplines with respect to the levels of consideration and degrees of detail in their model worlds.
    Subproject C3 of the COmpetence in MObility (COMO) project comprises the development of a reference database for managing complex models and dependencies, as well as the specification of reference data models for mechatronic design, modeling, and simulation.
    The holistic reference data model will integrate diverse models (mechanical, electrical, and control-engineering, among others) into virtual product components. This is intended to contribute to the further development of virtual technologies and to improving their applicability in engineering and planning processes.

    Project partner of the subproject is Prof. M. Schenk from the Fraunhofer Institute for Factory Operation and Automation (IFF) Magdeburg.

    View project in the research portal

    Reflective and Adaptive Middleware for Software Evolution of Non-Stopping Information Systems
    Duration: 01.04.2008 to 31.08.2011

    Today's information systems still remain far from exhibiting the levels of agility required to operate in our very volatile and competitive (socio-techno-economical) environment. Such environments require updated and new business services to be offered easily and rapidly while ensuring a high level of quality and certification. Towards that purpose, the present proposal addresses the rigorous development of self-adapting and run-time evolving information systems. The approach we propose is mainly interaction-centric. First, a reflective middleware is to be built with a UML-compliant base-level and a meta-level with evolutionary script-based rules and consistency checking of run-time self-adaptation and evolution. This reflective middleware is then to be enhanced by endowing it with a more general (domain-dependent) architecture with reconfiguration capabilities based on graph-transformation rewriting techniques and property-oriented (temporal) logic. Transformation models will then be forwarded both at the base- and at the meta-level for formal validation and property verification of the running (middleware-based) system on the basis of the (domain-based) architecture. Besides the proof of concept with academic case studies, the project will be validated with a non-trivial case study dealing with an urban traffic system.
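
The base-level/meta-level split can be sketched as follows (a deliberately minimal Python illustration; the class and rule names are invented, and the actual middleware is UML-compliant and uses far richer rule and logic machinery):

```python
# Sketch: a meta-level holds evolution rules and checks consistency before
# any run-time adaptation is applied to the base-level system, so the
# system can evolve without stopping.
class BaseLevel:
    def __init__(self):
        self.services = {"billing": "v1"}   # running business services

class MetaLevel:
    def __init__(self, base, rules):
        self.base = base
        self.rules = rules   # script-like consistency rules

    def evolve(self, service, new_version):
        for rule in self.rules:
            if not rule(self.base, service, new_version):
                return False          # adaptation rejected; system keeps running as-is
        self.base.services[service] = new_version  # applied at run time
        return True

# example consistency rule: never downgrade a running service
no_downgrade = lambda base, s, v: v >= base.services.get(s, "")

meta = MetaLevel(BaseLevel(), [no_downgrade])
print(meta.evolve("billing", "v2"))  # True
print(meta.evolve("billing", "v1"))  # False (rule rejects downgrade)
```

The point of reflection is exactly this separation: the rules that govern evolution live one level above the system they govern.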

    View project in the research portal

    Optimization and Self-Tuning Approaches for Data Warehouses
    Duration: 15.04.2007 to 31.12.2010

    Data warehouse systems have been used for some time for market and financial analyses in many areas of the economy. The areas of application of these systems are constantly expanding, and in addition the amount of data to be kept (historical data) grows ever faster. Since these are often very complex and time-critical applications, the analyses and calculations on the data must be optimized further and further. The ever-increasing performance of computer and server systems alone is not sufficient for this, because the applications constantly impose new requirements and increasingly complex calculations. This also makes it clear that the time and financial effort of operating such systems is immense.

    This project investigates what possibilities exist to extend previous approaches and to integrate new proposals into existing systems in order to increase their performance. To achieve this goal, approaches from the field of self-tuning are to be used, since they enable the systems to adapt autonomously to constantly changing conditions and requirements. These approaches are to be improved by extensions such as support for bitmap indexes. Furthermore, deeper levels of optimization are to be addressed, making (autonomous) physical optimization possible and easier.

    View project in the research portal

    Software Product Line Languages and Tools
    Duration: 25.11.2006 to 31.12.2010

    This project focuses on research and development of tools and languages for software product line development. The research aims at improving the usability and flexibility of current approaches and at reducing their complexity. This includes tools such as FeatureC++, FeatureIDE, and CIDE, as well as concepts like Aspect Refinement, Aspectual Mixin Layers, and the formalization of language concepts. The research centers around the ideas of feature-oriented programming and explores boundaries toward other development paradigms, including design patterns, aspect-oriented programming, generative programming, model-driven architectures, service-oriented architectures, and more.

    View project in the research portal

    ViERforES - Interoperability for Digital Products with Embedded Systems
    Duration: 01.09.2008 to 31.12.2010

    The functionality of new products is achieved through an increasing proportion of software in the form of embedded systems. In interaction with other function-determining components of complex technical systems, this requires new technologies for mastering the highest safety and reliability of product developments. The aim of ViERforES is to use virtual and augmented reality to make even non-physical product properties visible and thus to develop adequate methods and tools for engineering.

    Providing solutions for a holistic view of complex products or plants during development, testing, and operation poses major challenges for information technology. Among other things, independently modeled components must be integrated into an overall context, for which virtual or augmented reality can be used as an integrated workspace. The aim of the subproject "Interoperability for Digital Products with Embedded Systems" is therefore to ensure the interoperability of the heterogeneous systems involved and of the models they manage. This ranges from the syntactic level (different interfaces, data models, etc.) via the semantic level (meaning and connection of differently modeled data and functionalities) to the pragmatic level (use by users, support of workflows, cooperation).

    View project in the research portal

    ViERforES - Coordination
    Duration: 01.09.2008 to 31.12.2010

    The task of this subproject is to coordinate the collaboration of the project leaders of the subprojects in the application areas and cross-cutting topics of the ViERforES project, as well as presentation and public relations.

    View project in the research portal

    ViERforES - Secure Data Management in Embedded Systems
    Duration: 01.09.2008 to 31.12.2010

    The functionality of new products is achieved through an increasing proportion of software in the form of embedded systems. In interaction with other function-determining components of complex technical systems, this requires new technologies for mastering the highest safety and reliability of product developments. The aim of ViERforES is to use virtual and augmented reality to make even non-physical product properties visible and thus to develop adequate methods and tools for engineering.

    The goal of the subproject "Secure Data Management in Embedded Systems" is to survey the state of the art regarding safety and security and their interactions, with a special focus on embedded systems, and to map it to the application areas in cooperation with Kaiserslautern. Threats in this specific environment are to be analyzed and modeled (e.g., drawing on existing schemes such as the CERT taxonomy) and made tangible for users via virtual engineering. A further focus is the development of a product line for secure data management in embedded systems and concepts for making this product line available in virtual engineering.

    View project in the research portal

    Load-balanced Index Structures to Support Self-Tuning in DBMS
    Duration: 03.03.2007 to 31.03.2010

    Index structures have long been used in database management systems to speed up access to data objects in large data sets. Data spaces are usually indexed uniformly in order to achieve access costs that are as constant as possible. Furthermore, the index structures are optimized to describe the entire data domain, which usually results in large index instances. This project investigates what possibilities exist to adapt indexes better to the current requirements of a system as part of self-tuning. In contrast to the parallel research on index configurations, here the indexes themselves are to be adaptive, adjusting autonomously to the workload in the form of accesses to certain data regions. The resulting index structures consequently no longer need to be height-balanced and may be sparsely populated or cover the data space only partially.

    View project in the research portal

    Methods and Tools for Construction of Highly Configurable Database Families for Embedded Systems
    Duration: 01.04.2006 to 30.09.2008

    Embedded computer systems often need infrastructure software for the management of data that has a lot in common with traditional database management systems (DBMS). However, the hardware heterogeneity, the sometimes extreme resource constraints, and the different requirements of the often very special applications inhibit the use of standard software solutions. Programmers frequently react to this situation by developing their own solutions, which leads to a "reinvention of the wheel". The goal of this project is to evaluate and improve methods and tools for the construction of highly customizable DBMS. These techniques could reduce development effort by supporting reuse without increasing hardware costs. Besides the construction of DBMS families, a further goal is to analyze application code in order to automate and thus simplify the configuration process.
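
The idea of deriving a tailored DBMS from a family can be illustrated with a toy composition step (feature and component names are invented; real product-line tool chains compose code statically, e.g., via preprocessors or feature-oriented programming, rather than at run time):

```python
# Sketch: a tailored embedded DBMS is assembled from exactly the selected
# features, keeping the footprint small for resource-constrained hardware.
COMPONENTS = {
    "storage":      lambda: "page store",
    "index":        lambda: "B-tree",
    "transactions": lambda: "WAL + locking",
}

def build_dbms(selected):
    # only the selected features end up in the product; unselected
    # components carry no code-size or memory cost
    return {f: COMPONENTS[f]() for f in selected if f in COMPONENTS}

print(sorted(build_dbms({"storage", "index"})))  # ['index', 'storage']
```

Analyzing the application code, as the project proposes, would amount to inferring the `selected` set automatically instead of asking the developer for it.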

    View project in the research portal

    Reflective and Adaptive Middleware for Software Evolution of Non-Stopping Information Systems
    Duration: 15.10.2005 to 31.03.2008

    Today's information systems still remain far from exhibiting the levels of agility required to operate in our very volatile and competitive (socio-techno-economical) environment. Such environments require updated and new business services to be offered easily and rapidly while ensuring a high level of quality and certification. Towards that purpose, the present proposal addresses the rigorous development of self-adapting and run-time evolving information systems. The approach we propose is mainly interaction-centric. First, a reflective middleware is to be built with a UML-compliant base-level and a meta-level with evolutionary script-based rules and consistency checking of run-time self-adaptation and evolution. This reflective middleware is then to be enhanced by endowing it with a more general (domain-dependent) architecture with reconfiguration capabilities based on graph-transformation rewriting techniques and property-oriented (temporal) logic. Transformation models will then be forwarded both at the base- and at the meta-level for formal validation and property verification of the running (middleware-based) system on the basis of the (domain-based) architecture. Besides the proof of concept with academic case studies, the project will be validated with a non-trivial case study dealing with an urban traffic system.

    View project in the research portal

    Virtual Development and Logistics Platform (TP 13 Automotive)
    Duration: 01.10.2005 to 30.09.2007

    Holistic virtual engineering, from the development through to the manufacturing of products, requires connecting different engineering disciplines with respect to the levels of consideration and degrees of detail in their model worlds. The use of model components oriented towards the module and interface concept of the products has proven advantageous in this context. The engineering foundations and modeling concepts required for this are to be worked out within this interdisciplinary project and tested by means of a prototypical software platform supporting product development processes. This is intended to contribute to the further development of virtual technologies and to improving their applicability in engineering and planning processes. Project partners are Prof. R. Kasper from the Institute of Mobile Systems of the OvGU Magdeburg, Prof. U. Gabbert from the Institute of Mechanics of the OvGU Magdeburg, and Prof. M. Schenk from the Fraunhofer Institute for Factory Operation and Automation (IFF) Magdeburg.

    View project in the research portal

    Lastbalancierte Indexstrukturen zur Unterstützung des Self-Tuning in DBMS
    Duration: 01.10.2004 to 02.03.2007

    Index structures have long been used in database management systems to speed up access to data objects in large data sets. Usually, data spaces are indexed uniformly in order to achieve access costs that are as constant as possible. Furthermore, index structures are optimized to cover the entire data domain, which typically results in large index instances. This project investigates how indexes can be better adapted to a system's current requirements as part of self-tuning. In contrast to the parallel research on index configurations, the indexes themselves are to be adaptive here, adjusting autonomously to the workload in the form of accesses to particular data regions. The resulting index structures consequently no longer need to be height-balanced and may be sparsely populated or cover the data space only partially.
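    The idea of a sparse, workload-adaptive index can be illustrated with a small sketch (illustrative Python only; the class and parameter names are hypothetical, not taken from the project): a key is promoted into the index only after it has been looked up often enough, so the index covers hot regions of the data space and falls back to a scan elsewhere.

```python
from collections import Counter

class PartialIndex:
    """Sketch of a workload-adaptive partial index: only frequently
    accessed keys are indexed; cold lookups fall back to a table scan.
    All names here are illustrative, not from the project."""

    def __init__(self, threshold=3):
        self.accesses = Counter()   # observed lookups per key
        self.index = {}             # sparse index: key -> row position
        self.threshold = threshold  # promotion threshold for hot keys

    def lookup(self, table, key):
        self.accesses[key] += 1
        if key in self.index:                 # fast path: key is indexed
            return table[self.index[key]]
        for pos, row in enumerate(table):     # slow path: sequential scan
            if row[0] == key:
                if self.accesses[key] >= self.threshold:
                    self.index[key] = pos     # promote hot key into index
                return row
        return None
```

Because cold keys never enter the index, the structure stays small and need not cover (or balance over) the whole data space.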

    View project in the research portal

    Werkzeugunterstützung für die Entwicklung von Produktlinien
    Duration: 01.10.2004 to 31.08.2006

    Product line technologies and domain engineering are important methods for building reusable, configurable, and manageable software. The goal of the project is end-to-end support for the product line and domain engineering process. Currently, there is a multitude of methods and tools supporting the individual phases of domain engineering (analysis, design, implementation, configuration). However, these are often completely disconnected, so not all information from one phase can be carried over into the next and is thus lost. This information is then missing during later extensions, adaptations, and maintenance, so many properties of the software under construction, once specified, must be entered, implemented, or specified again several times. Furthermore, the project develops FeatureC++, a feature-oriented extension of C++. The idea is to offer language support for application development with C++ as well (previously only Java with AHEAD). It is also intended to show that the development environment and the development process are independent of any particular language (AHEAD for Java, FeatureC++ for C++) or special tools; the process and the tools merely follow the paradigm of feature orientation.

    View project in the research portal

    Hochkonfigurierbares Datenmanagement
    Duration: 01.10.2002 to 02.03.2006

    The application areas of computing systems are becoming ever more diverse. Microprocessors can already be found in every car, every aircraft, and even in washing machines. Current developments such as "ubiquitous computing" and "pervasive computing" will reinforce this trend. Such "embedded" computing systems, too, frequently need infrastructure software for data management that has much in common with classical data management in DBMS. However, the heterogeneity of the hardware, the sometimes extreme resource constraints, and the diverse requirements of the often highly specialized application programs prevent the use of standard solutions. To keep developers from responding with in-house solutions, specially tailorable DBMS are needed for the application domain of embedded systems. The goal of this project is to evaluate and refine methods and tools suited to building application-specifically configurable DBMS. Besides the construction of the DBMS family, the analysis of applications is also considered in order to minimize, through automation, the effort of configuring the appropriate DBMS variant.

    View project in the research portal

    Relevance-Feedback
    Duration: 01.03.2002 to 01.03.2006

    When searching image databases without textual annotations, one depends on automatically extracted metadata. With relevance feedback, the search is performed interactively on the extracted data, which consists of features such as color and shape. These so-called low-level features can only vaguely describe a sought image, so the result set of a query over this data space usually does not match the user's expectations exactly. Through several iterative steps during a query process, human judgment can be incorporated into query execution. If a result set is unsatisfactory, several techniques exist in which iterative query reformulation improves the result set. One example is letting the user rate the result set; the rated query is then sent to the system as a new query. User-oriented support during the iteration is possible through a suitable presentation of the result set.
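    The feedback loop described above resembles the classic Rocchio method from information retrieval. As an illustration (an assumed stand-in, not necessarily the technique used in this project), one feedback step moves the query's feature vector toward images the user marked relevant and away from non-relevant ones:

```python
def rocchio(query, relevant, nonrelevant, alpha=1.0, beta=0.75, gamma=0.15):
    """One Rocchio-style relevance feedback step. The vectors stand in
    for extracted low-level features (e.g. color histograms); parameter
    values are conventional defaults, not taken from the project."""
    dim = len(query)

    def centroid(vecs):
        # Mean vector of a (possibly empty) set of feature vectors.
        if not vecs:
            return [0.0] * dim
        return [sum(v[i] for v in vecs) / len(vecs) for i in range(dim)]

    rel_c = centroid(relevant)
    non_c = centroid(nonrelevant)
    # Shift the query toward relevant results, away from non-relevant ones.
    return [alpha * query[i] + beta * rel_c[i] - gamma * non_c[i]
            for i in range(dim)]
```

The reformulated vector is then issued as the next query, and the cycle repeats until the user is satisfied with the result set.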

    View project in the research portal

    Ähnlichkeitsbasierte Operationen für die Integration strukturierter Daten
    Duration: 01.10.2000 to 02.08.2005

    Handling discrepancies in data is still a major challenge, relevant for example to eliminating duplicates from semantically overlapping data sources as well as to joining complementary data from different sources. The corresponding operations usually cannot rely on value equality alone, since identifiers valid across system boundaries exist only in rare cases. Using further attribute values is problematic because erroneous data and differing representations are a common problem in this context. Such operations must therefore be based on the similarity of data objects and values. These problems are addressed in the doctoral project of Eike Schallehn by providing similarity-based operations within a lightweight, generic framework. Similarity-based selection, join, and grouping are discussed with respect to their general semantics and to particular aspects of the underlying similarity relations. Corresponding data processing algorithms are described for materialized and virtual data integration scenarios. Implementations are presented and evaluated with regard to the applicability and efficiency of the presented approaches.
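    A similarity-based grouping operator can be sketched as follows (illustrative Python under simplifying assumptions; since similarity is not transitive, this naive single-pass variant depends on input order, which is exactly one of the semantic issues such operators must address):

```python
def similarity_group(values, sim, threshold):
    """Sketch of similarity-based grouping: a value joins the first
    existing group containing some member it is similar enough to;
    otherwise it starts a new group. `sim` is any symmetric similarity
    function returning a score in [0, 1]; all names are illustrative."""
    groups = []
    for v in values:
        for g in groups:
            if any(sim(v, w) >= threshold for w in g):
                g.append(v)   # similar to an existing group member
                break
        else:
            groups.append([v])  # no similar group found: new group
    return groups
```

With an edit-distance-based `sim`, for example, near-duplicate names from overlapping sources end up in the same group and can then be merged.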

    View project in the research portal

    Optimierung von Ähnlichkeitsanfragen in Multimedia-Datenbanksystemen
    Duration: 01.01.2003 to 01.08.2005

    Searching in multimedia database systems must consider not only exact results but also those that come as close as possible to the desired information, i.e. similar ones. A query might, for instance, ask an image database for the images most similar to a given example image. "Similarity", however, is influenced by various factors, such as the user's subjective judgment and the weighting of subqueries. Since such factors generally cannot be predicted by a system, they must be integrated into the system's query language. For query formulation, a calculus-based QBE language suits the user because of its declarative character, whereas for query processing by the computer an algebra-based language is more suitable. The algebra expressions generated from declarative user queries are in general not the best possible evaluation plans, so optimization is useful and indeed necessary. The similarity values embedded in the language demand special consideration during optimization.

    View project in the research portal

    Suche in Multimedia-Datenbanken
    Duration: 02.03.2005 to 01.08.2005

    The long-term goal is to investigate the use of database concepts for managing multimedia data. The focus is on methods and tools for searching multimedia data, with important research results to be validated and demonstrated in prototypes. Searching multimedia data requires the specification of queries, which is covered by the research focus "weighting of queries". For this purpose the query language WS-QBE was developed, which allows a QBE-like specification of similarity queries. WS-QBE queries are translated via a calculus language into a similarity algebra, in which optimization and then result computation are carried out. Efficiently finding results requires high-dimensional index structures. Often a query result can only be found through several query iterations, for which concepts of relevance feedback are used.

    View project in the research portal

    Parallel SQL Based Frequent Pattern Mining
    Duration: 01.01.2002 to 01.05.2005

    Data mining on large relational databases has gained popularity, and its significance is well recognized. However, the performance of SQL-based data mining is known to fall behind that of specialized implementations. We investigate SQL-based approaches to the problem of finding frequent patterns in a transaction table, including a recently proposed algorithm called Ppropad (Parallel PROjection PAttern Discovery). Ppropad fundamentally differs from Apriori-like candidate generate-and-test approaches: it successively projects the transaction table onto frequent itemsets, avoiding multiple passes over the large original transaction table and the generation of a huge set of candidates. We have built a parallel database system with DB2 and evaluated performance on it, showing that data mining with SQL can achieve sufficient performance through database tuning.
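    The projection idea can be sketched in a few lines (an illustrative in-memory Python version; the project's Ppropad expresses this recursion in SQL over a parallel DB2 system, and the function below is not its implementation): rather than generating and testing candidate itemsets, the transaction set is recursively projected onto each frequent item.

```python
from collections import Counter

def frequent_patterns(transactions, minsup):
    """Projection-based frequent pattern mining (illustrative sketch).
    Returns a dict mapping each frequent itemset (sorted tuple) to its
    support count."""
    def mine(db, suffix, out):
        counts = Counter(item for t in db for item in t)
        for item, n in counts.items():
            if n >= minsup:
                pattern = tuple(sorted(suffix + (item,)))
                out[pattern] = n
                # Project: keep only transactions containing `item`,
                # restricted to items ordered after it, so every
                # pattern is enumerated exactly once.
                proj = [tuple(i for i in t if i > item)
                        for t in db if item in t]
                mine([t for t in proj if t], suffix + (item,), out)
        return out
    return mine([tuple(t) for t in transactions], (), {})
```

Each projected database only contains transactions relevant to the current prefix, so no pass over the full original table is repeated and no global candidate set is materialized.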

    View project in the research portal

    Indexunterstützung für Anfrageoperationen in Mediatorsystemen
    Duration: 01.04.2003 to 31.03.2005

    Many users and applications need to integrate semi-structured data from autonomous, heterogeneous web data sources. In recent years, mediator systems have emerged that use domain knowledge in the form of ontologies or vocabularies to solve the problem of structural heterogeneity. However, many users lack the necessary knowledge of the data, its structure, and the query language to make meaningful use of this data. It is therefore necessary to provide easy-to-use query interfaces, i.e. keyword search and browsing. The goal of the project is an index-based realization of keyword search in concept-based mediator systems. To execute global queries efficiently, an index is built at the global level from query results and kept up to date. In addition to keyword queries, string similarity operations are to be supported as well.

    View project in the research portal

    Selbstverwaltung von Indexkonfigurationen in DBMS
    Duration: 01.04.2003 to 31.03.2005

    A principal means of tuning databases is creating indexes to speed up the execution of a variety of operations. Creating the appropriate indexes, however, is a difficult task that requires precise knowledge of how the data is used and how the respective database management system works. To support this task, DBMS vendors have in recent years developed tools that, for example, analyze typical queries or query logs and derive a static recommendation for an index configuration. In practice, however, databases exist in a very dynamic environment in which, besides typical usage profiles (queries), the data itself and the available system resources change constantly. This project investigates how, based on a continuous analysis of the system and its usage, the current index configuration can be adapted automatically to changing requirements.

    View project in the research portal

    Konsistenzsicherung bei serverseitigen Änderungen für Datenbestände mobiler Clients
    Duration: 01.11.2004 to 01.03.2005

    Information systems with mobile clients must compensate for restrictions in hardware (lightweight devices), power supply (usually batteries), and network usage (cost, speed, availability). Techniques that store data redundantly on the mobile device are often employed for this, ranging from caching through hoarding to replication. They differ essentially in how the user can influence which data is buffered. In caching, in particular semantic caching, query results are buffered and reused for new queries where possible. Hoarding techniques try to anticipate which data may later be useful to the user of a mobile device. Replication techniques allow data to be requested explicitly. All three approaches, however, create an artificial redundancy of the server data that leads to inconsistencies upon updates. Both client and server must therefore support consistency-preserving measures. This project investigates how such reconciliation can be carried out depending on the chosen buffering approach.

    View project in the research portal

    Suche in Multimedia-Datenbanken
    Duration: 02.03.2000 to 01.03.2005

    The long-term goal is to investigate the use of database concepts for managing multimedia data. The focus is on methods and tools for searching multimedia data, with important research results to be validated and demonstrated in prototypes. Searching multimedia data requires the specification of queries, which is covered by the research focus "weighting of queries". For this purpose the query language WS-QBE was developed, which allows a QBE-like specification of similarity queries. WS-QBE queries are translated via a calculus language into a similarity algebra, in which optimization and then result computation are carried out. Efficiently finding results requires high-dimensional index structures. Often a query result can only be found through several query iterations, for which concepts of relevance feedback are used.

    View project in the research portal

    Softwaretechnische Methoden zur Entwicklung adaptiver verteilter Systeme
    Duration: 01.12.2002 to 31.12.2004

    In the context of global networking, distributed systems are becoming ever more important. They pervade more and more areas of everyday life and must react ever more flexibly to external influences or be adapted to them. The goal of this doctoral project is to counter the growing complexity of these systems, given the ever broader spectrum of potential applications and target platforms, with modern software engineering methods. Aspect-oriented, generative, and feature-oriented programming in particular are examined with respect to the adaptability, reusability, and extensibility of distributed systems without compromising comprehensibility and maintainability. In this context, new methods such as configurable binding, the combined application of the aforementioned language paradigms, and visual tool support have been and are being developed. Besides this static view, the focus also lies on the dynamic adaptation of distributed systems at run time. Here, reflective architectures and dynamic aspect weaving are examined at the software engineering level. At the conceptual level, a connection is established between complexity research, cybernetics, and self-organizing adaptive decentralized distributed systems.

    View project in the research portal

    Adaptive Replikation von Daten in heterogenen mobilen Kommunikationsnetzen
    Duration: 01.11.2000 to 31.10.2004

    Modern communication networks with mobile, wireless access open up a multitude of new application areas. The mobility of the devices and the extent of the networks require distributed and redundant management of both the management data and the actual payload data in order to guarantee smooth operation as well as efficient, low-cost access. At the same time, however, this entails the need to update the individual copies of the data consistently. This is further complicated by the heterogeneity of the networks and of the system services built on top of them, caused by the variety of technologies and operators. This project therefore addresses problems of data management in heterogeneous mobile networks. Starting from the analysis of concrete application scenarios and the possibilities arising from them, replication techniques are examined primarily with regard to their adaptability to changing conditions, such as changes in network topology, in the availability of individual nodes or network segments, and in data usage behavior.

    View project in the research portal

    Integrating techniques of software specification in engineering applications
    Duration: 01.01.1999 to 31.12.2003

    The control flow of many engineering applications can only be realized incompletely in software; external influences and human interaction ("open systems") prevent this. Furthermore, the specified processes must be flexibly adaptable to new requirements and conditions. Depending on the flexibility of the described processes, process descriptions frequently have to be adapted during live operation. This gives rise to new requirements for software specifications that classical computer science methods cover only incompletely. The goal is to create a specification language and method for engineering applications with properties such as the use of a widespread notation, the assignment of processes to objects, hierarchical behavior refinement, high adaptability and flexibility, good analyzability, and the ability to generate operational processes. The specification methodology is to be tested with test cases on the processes of a concrete material flow facility, with particular emphasis on the adaptability of process descriptions.

    View project in the research portal

    MuSofT - Multimedia in Software Engineering
    Duration: 01.03.2001 to 31.12.2003

    The goal of the MuSofT project is to support education in the area of software engineering through the application of "new media". Software engineering is a major part of the curricula in computer science, business computer science, and courses with computer science as a minor subject. Furthermore, software engineering is becoming more and more important in engineering and information technology courses. Education in these application-oriented areas of computer science shall be supported by the development of multimedia materials, thus providing the necessary quality of education even with large and very large numbers of students.
    Within sub-project 1.2, a learning unit on the development of information systems will be developed. The main focus will be on databases, which are a major part of many modern software systems. The lectures impart practical knowledge about the design of databases as well as aspects of database theory. For the different phases of the database design process, materials will be developed that support lectures, seminars, and practical work as well as self-study.

    View project in the research portal

    Internet-Datenbank für kriegsbedingt verbrachte Kulturgüter
    Duration: 01.10.1999 to 01.10.2003

    The goal of the project is the design and implementation of a database for managing cultural assets displaced as a result of war (looted art). Within this scope, a WWW interface is to be developed that enables searches by various criteria and takes into account aspects of billing for queries.

    View project in the research portal

    Federation and Integration Services for Information Fusion
    Duration: 01.01.2000 to 31.03.2003

    The management of large data sets, guarantees of currency and consistency, and the retrieval of data are main features of information systems deployed in many areas of enterprises. Due to the globalization of markets, the need to use current, worldwide-distributed information is increasing. The character of this data (heterogeneity, structure, redundancy, and inconsistency) makes its integration with one's own data difficult. At the same time, the sheer amount of data requires suitable provisions for filtering, condensing, and extracting relevant information. The variety of potential data sources and data structures, differing information requirements (for example concerning consistency, currency, and availability), the support of user-related fusion and analysis methods, and scalability all presuppose a flexible and extensible infrastructure. Methodologies and techniques for such an infrastructure shall be developed as a generic kernel for efficient applications supporting information fusion.

    View project in the research portal

    Föderierungsdienst für heterogene Dokumentenquellen
    Duration: 01.09.1999 to 31.12.2001

    The goal of this project is the design and implementation of a federation service for literature and information retrieval in heterogeneous information systems. Such a component is necessary because, in the application scenario of the Germany-wide project Global-Info, heterogeneous and autonomous information systems must be brought together that typically operate distributed across the network and whose local properties cannot be influenced. The federation service also includes managing the federation's metadata in a database. Further essential subproblems are methods for extracting metadata from partially structured documents and for recognizing identical information objects (documents, author information, etc.).

    View project in the research portal

    FIREworks: Feature Integration in Requirements Engineering
    Duration: 01.05.1997 to 30.04.2000

    FIREworks addresses the problem of adding features to specifications of complex software products, in particular software for telecommunication services and banking. It will provide a feature-oriented approach to software design, including requirements specification languages and verification logics, as well as a method for their usage. The aim is to provide a method with which companies can build products by taking an existing product and adding, removing, or respecifying some features.

    View project in the research portal

    Tools and Components for efficient Development and practical Implementation of Federated Database Systems
    Duration: 01.03.1998 to 28.02.2000

    This project focuses on the design of federated database systems. These systems promise database integration while preserving local autonomy, which means that integrated databases can still be used by legacy applications.

    View project in the research portal

    Federating Heterogeneous Database Systems and Local Data Management Components for Global Integrity Maintenance
    Duration: 01.09.1995 to 28.02.1998

    The goal of the project is the development of a basic information infrastructure as the foundation for integrated and uniform data management across all phases of factory planning. To this end, a federated heterogeneous information system is to be created that serves as a framework for integrating all software tools involved in factory planning, including their local data holdings. Previously separate tools that are used, or are yet to be developed, for product design, production planning, and factory simulation are thus to be combined synergetically. Besides providing a homogeneous database interface for global applications, the central task of such a federated information system is to guarantee data consistency across systems. To this end, active mechanisms are to be realized at a superordinate level in order to connect the individual systems with their differing means of integrity maintenance. Connecting the individual systems requires the integration of the various data schemas, which are often based on different data models.

    View project in the research portal

    Knowledge transfer in the field of database technologies


    Data management

    • in the cloud
    • on new hardware (CPU, GPU, …)

    Self-tuning approaches


    Providing software engineering techniques to developers

    • Configurable software (software product lines, multi product lines)
    • Software maintainability (refactoring)
    Gunter Saake's research covers the areas of databases and software engineering. In the database area, Gunter Saake works in particular on database operations on modern hardware, the conceptual design of database applications, query languages for databases, and analysis-centric databases. In the area of software engineering, Gunter Saake advances the study of techniques that allow programs to be developed and configured modularly. These techniques include preprocessors, object-oriented programming, aspect-oriented programming, and feature-oriented programming. In addition, Gunter Saake is engaged in developing techniques for virtual reality.

    Last Modification: 28.07.2021 -
    Contact Person: Webmaster