Contact
Prof. Dr. Wolfgang Karl

Haid-und-Neu-Str. 7
76131 Karlsruhe
Germany

Tel.: +49 721 608-43771
Fax: +49 721 608-43962

E-Mail: karl@kit.edu

Research Projects

Hardware-based approach for big data

In today's era of information, vast amounts of data are generated, commonly referred to as big data. These data come from social networks, telephone companies, online retailers, and sensors in the Internet of Things. The gathered data contain a wealth of knowledge, but extracting deep insight from them is not an easy task. The sheer volume is not the only problem: the variability and plausibility of the data also pose great challenges for analysis. We currently investigate how specially designed hardware architectures can support and accelerate data acquisition, data filtering, and data analysis. Algorithms from graph theory and machine learning are used to analyze big data. Besides graph-based and textual data, a huge amount of multimedia data is generated as well. For analyzing such diverse data types, FPGA-based hardware architectures are a promising approach to taming the complexity of big data.
One aim of this project is to design a novel memory management for in-memory databases, which are currently a hot research topic. These databases allow fast access to the records they store, and specially designed hardware architectures can support the memory management of the host processor.

Hardware acceleration for computational biology applications

The focus of this project is the design of hardware architectures for applications from the field of computational biology. Systems from Convey Computer or Maxeler, both available at our chair, are well suited for building such special hardware architectures. These heterogeneous FPGA systems combine standard processors with user-programmable FPGAs that act as coprocessors for accelerating parts of an algorithm, while the host processor can keep working on other parts. To accelerate the search for homologous protein sequences in a large database, we are designing a coprocessor for fast prefiltering of the database. We also investigate how the heterogeneity of the Convey HC-1 can be fully leveraged to accelerate the tool HHblits.
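To illustrate the overlap of host and coprocessor work described above, here is a minimal C++ sketch. The FPGA prefilter is replaced by a toy software function, and the actual Convey/Maxeler programming interfaces are not shown.

// Sketch of the offload pattern: the (simulated) coprocessor prefilters
// the database while the host continues with other work.
#include <future>
#include <iostream>
#include <string>
#include <vector>

// Toy stand-in for the FPGA prefilter: keep only database entries that
// share a short prefix with the query. The real hardware filter is far
// more elaborate; this only illustrates the call structure.
std::vector<std::size_t> prefilter(const std::vector<std::string>& db,
                                   const std::string& query) {
    std::vector<std::size_t> candidates;
    for (std::size_t i = 0; i < db.size(); ++i)
        if (db[i].compare(0, 2, query, 0, 2) == 0)
            candidates.push_back(i);
    return candidates;
}

int main() {
    std::vector<std::string> db = {"MKTAYIAK", "MKVLWAAL", "GSHMARND"};
    std::string query = "MKTLWQRS";

    // Launch the (simulated) coprocessor prefilter asynchronously ...
    auto future = std::async(std::launch::async, prefilter,
                             std::cref(db), std::cref(query));

    // ... while the host processor keeps working on other parts of the
    // algorithm (here just a placeholder message).
    std::cout << "host: preparing full alignment step\n";

    // Only the prefiltered candidates go into the expensive comparison.
    for (std::size_t idx : future.get())
        std::cout << "candidate sequence #" << idx << "\n";
}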

KIT Lehre hoch Forschung

Within the project "KIT Lehre hoch Forschung", funded by the German Federal Ministry of Education and Research (BMBF), the Chair of Computer Architecture and Parallel Processing of Prof. Karl offers the course "Project-oriented Software Laboratory (Parallel Numerics)". The course treats current problems from the Chair's research activities in areas such as parallel programming with different programming models (e.g. MPI, OpenMP, CUDA, OpenCL). For example, research-oriented applications from different fields of fluid mechanics are considered. To solve these problems, modern mathematical techniques are taught and the use of high-performance computing is explained with practical examples. Project-based problems are worked on in small groups, and the results are documented in a report and presented in talks at the end of the course.
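As an impression of the kind of exercise treated in the lab, the following is a minimal OpenMP sketch of one Jacobi-style stencil sweep; the actual course assignments may of course look different.

// One Jacobi iteration over a 2D grid, parallelized with OpenMP.
// Build with: g++ -fopenmp jacobi.cpp
#include <cstdio>
#include <vector>

int main() {
    const int n = 512;
    std::vector<double> u(n * n, 0.0), u_new(n * n, 0.0);

    // Fixed value on the top row as a simple boundary condition.
    for (int j = 0; j < n; ++j) u[j] = 1.0;

    // Every interior point becomes the average of its four neighbours;
    // the rows are distributed across the OpenMP threads.
    #pragma omp parallel for
    for (int i = 1; i < n - 1; ++i)
        for (int j = 1; j < n - 1; ++j)
            u_new[i * n + j] = 0.25 * (u[(i - 1) * n + j] + u[(i + 1) * n + j] +
                                       u[i * n + j - 1] + u[i * n + j + 1]);

    std::printf("u_new at (1,1): %f\n", u_new[n + 1]);
}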

TM-Opt

The DFG-funded project "TM-Opt" researches methods and strategies to analyze, rate, and optimize the runtime behavior of Transactional Memory (TM) applications. The analysis aims to uncover conflicting accesses from transactions executed by concurrently running threads. The gathered information is then used in an optimization phase to reduce the conflict potential of competing transactions and thereby improve the runtime behavior. The research project complements the state of the art in Transactional Memory research.
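To make the notion of conflicting transactions concrete, the following small C++ example uses the transactional memory extension available in GCC (built with -fgnu-tm). It illustrates the kind of conflict TM-Opt analyzes; it is not the TM-Opt tooling itself.

// Two threads update the same balances inside transactions, so their
// transactions can conflict and one of them has to be re-executed.
// Build with a TM-enabled GCC: g++ -fgnu-tm -pthread conflict.cpp
#include <iostream>
#include <thread>

long balance_a = 1000;
long balance_b = 0;

void transfer(long amount) {
    for (int i = 0; i < 10000; ++i) {
        // Both threads read and write balance_a and balance_b, so the
        // transactions of concurrently running threads may conflict.
        __transaction_atomic {
            balance_a -= amount;
            balance_b += amount;
        }
    }
}

int main() {
    std::thread t1(transfer, 1);
    std::thread t2(transfer, 2);
    t1.join();
    t2.join();
    // The sum stays constant because conflicting transactions are
    // detected and re-executed rather than interleaved.
    std::cout << balance_a + balance_b << "\n";
}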

Self-aware Memory (SaM)

Self-aware Memory (SaM) is a decentralized, autonomously self-optimizing memory management system for scalable many-core architectures with highly dynamic application scenarios, aiming to increase the overall system's flexibility, dependability, and scalability. Research covers scalable and dynamic allocation of private and shared memory, efficient decentralized synchronization techniques, transactional memory support, and especially autonomous memory self-optimization, e.g. locality optimization.
In addition to memory management, a decentralized management for the allocation of compute resources is investigated.

HALadapt

Exploiting heterogeneous parallel systems poses a new challenge for application developers. Due to the diversity of such systems, statically choosing the processing units on which compute kernels execute can cause low performance and high energy consumption, or even prevent execution altogether if the required unit is not present.
In this project, we research light-weight concepts and online-learning mechanisms that autonomously analyze the system environment, including competing applications, and adapt execution accordingly during application runtime in order to increase application and overall system performance, dependability and power efficiency.
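A deliberately simplified sketch of such a runtime selection mechanism is given below. It is not the HALadapt implementation; it only illustrates the basic idea of measuring the available implementations of a kernel and preferring the fastest one observed so far.

// Simplified runtime implementation selection: each call measures the
// chosen variant, and later calls prefer the fastest variant seen so far.
#include <algorithm>
#include <chrono>
#include <functional>
#include <iostream>
#include <limits>
#include <string>
#include <vector>

struct Variant {
    std::string name;                 // e.g. "cpu" or "gpu"
    std::function<void()> run;        // the kernel implementation
    double best_seconds = std::numeric_limits<double>::infinity();
    int calls = 0;
};

void execute(std::vector<Variant>& variants) {
    // Try every variant once, then stick with the fastest one observed.
    Variant* choice = nullptr;
    for (auto& v : variants)
        if (v.calls == 0) { choice = &v; break; }
    if (!choice)
        for (auto& v : variants)
            if (!choice || v.best_seconds < choice->best_seconds) choice = &v;

    auto start = std::chrono::steady_clock::now();
    choice->run();
    double s = std::chrono::duration<double>(
                   std::chrono::steady_clock::now() - start).count();
    choice->best_seconds = std::min(choice->best_seconds, s);
    ++choice->calls;
    std::cout << "ran " << choice->name << " in " << s << " s\n";
}

int main() {
    std::vector<Variant> kernel = {
        {"cpu", [] { /* CPU implementation of the kernel (stub) */ }},
        {"gpu", [] { /* GPU implementation (stub) */ }},
    };
    for (int i = 0; i < 5; ++i)
        execute(kernel);   // the selection adapts to the measured times
}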

Acceleration of a shallow water simulation for operational application

For disaster management, an existing and verified shallow water simulation is used to predict the flow of water in case of dam breaks or floods. Since, in an emergency, the results of such a simulation are required as quickly as possible, high-performance clusters are not a feasible option because they cannot be used on demand.
In order to obtain the results in reasonable time on readily available commodity hardware, the goal of this project is to research mechanisms that automatically accelerate the simulation using: a) dynamic regional simplification of the numerical equations and b) effective exploitation of modern heterogeneous parallel systems.
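For reference, the standard two-dimensional shallow water equations in conservative form are shown below; the exact formulation used in the verified simulation (e.g. additional friction or source terms) may differ.

\[ \frac{\partial h}{\partial t} + \frac{\partial (hu)}{\partial x} + \frac{\partial (hv)}{\partial y} = 0 \]
\[ \frac{\partial (hu)}{\partial t} + \frac{\partial}{\partial x}\left(hu^2 + \tfrac{1}{2} g h^2\right) + \frac{\partial (huv)}{\partial y} = -gh\,\frac{\partial b}{\partial x} \]
\[ \frac{\partial (hv)}{\partial t} + \frac{\partial (huv)}{\partial x} + \frac{\partial}{\partial y}\left(hv^2 + \tfrac{1}{2} g h^2\right) = -gh\,\frac{\partial b}{\partial y} \]

Here h is the water depth, u and v are the depth-averaged velocities, g is the gravitational acceleration, and b is the bathymetry. Dynamic regional simplification then means that, in parts of the domain, cheaper approximations of this system are solved.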

Self-Organizing and Self-Optimizing Many-Core Architectures

This research project investigates the use of self-organizing, or Organic Computing, principles within dynamically reconfigurable many-core architectures. The goal of this project is to hide the complexity of such architectures from the user and to ease their management and efficient utilization. Using the novel Digital on-Demand Computing Organism (DodOrg) as an evaluation platform, research in this project covers all areas of self-organizing systems, ranging from system monitoring up to the realization of self-optimizing and proactive system behavior.
The DodOrg project is a joint research project pursued by 4 cooperating chairs from 3 institutes. It is funded through the DFG Priority Program 1183 "Organic Computing".

GCC for Transactional Memory

Simplified synchronization with Transactional Memory (TM) depends on the availability of a compiler that supports TM. For widespread acceptance and usage of TM, a free and platform-independent compiler is essential. This gap is addressed by a collaboration, in the context of the European Network of Excellence on High Performance and Embedded Architecture and Compilation (HiPEAC), with Prof. Albert Cohen (INRIA Saclay, France). The collaboration aims at a robust and stable implementation of TM support in the GCC compiler suite.