The current research focus of the Department of Telematics/Computer Networks can be briefly characterized by a few keywords. In all areas, the goal is to develop and prototype new protocol mechanisms and implementation concepts, as well as to determine parameters from practical use.
With the increasing integration of information and communication systems into almost all areas of private, social, and business life, modern information societies are becoming ever more dependent on the availability and correct functioning of the underlying communication infrastructures. In this context, deliberate sabotage attacks on basic communication or system services are proving to be a growing threat. Between 1989 and 1995, for example, the number of incidents reported to the Computer Emergency Response Team (CERT) increased by 50% annually. A 1999 study by the U.S. FBI reported that 32% of the participating agencies had experienced sabotage attacks on their systems in the previous year. The situation is worsening: studies from 2000 indicate that attackers increasingly use dedicated attack tools to set up and coordinate distributed attacks launched from a large number of systems; this category of attack is referred to as Distributed Denial of Service (DDoS). In addition to these sabotage attacks, which have moved into the focus of interest over the past five years, "conventional" attacks such as eavesdropping, replaying, and modifying data, as well as unauthorized service use, continue to pose a serious threat to our communication infrastructure.
The high risk potential of attacks on current and future network infrastructures, together with the growing dependence of modern information society on the availability of these networks, results in a constantly increasing overall threat that must be adequately countered. This applies to an even greater extent in view of the ongoing standardization of the communication protocols used, as currently pursued through the increasing introduction of IP-based components into the network infrastructure (see, for example, future releases of the UMTS standards). There is thus an urgent need to carry out systematic threat analyses and to develop a coordinated catalogue of measures that makes it possible to effectively counter both conventional attacks on the confidentiality and integrity of transmitted data and deliberate sabotage attacks. Such a catalogue must comprise both cryptographic measures (encryption, integrity protection, authentication, "client puzzles", etc.) and network-level measures (traceback, packet filtering, intrusion detection, active network technologies, etc.). At the same time, it must be ensured that the performance characteristics of the communication services are not unduly impaired. In particular, it must be investigated how suitable protection mechanisms can be integrated into communication infrastructures in such a way that quality of service (QoS) requirements can still be met.
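One of the cryptographic measures mentioned above, the "client puzzle", can be illustrated with a minimal sketch: before committing resources to a request, the server hands the client a small proof-of-work task whose solution is moderately expensive to compute but cheap to verify, which throttles flooding attackers. The following Python sketch uses a hash-based puzzle; the function names and the SHA-256 construction are illustrative choices, not a specific standardized scheme.

```python
import hashlib
import os

def make_puzzle(difficulty_bits: int = 16):
    """Server side: issue a random nonce and a difficulty level."""
    return os.urandom(8), difficulty_bits

def leading_zero_bits(digest: bytes) -> int:
    """Count the number of leading zero bits in a digest."""
    bits = 0
    for byte in digest:
        if byte:
            return bits + 8 - byte.bit_length()
        bits += 8
    return bits

def solve_puzzle(nonce: bytes, difficulty_bits: int) -> int:
    """Client side: brute-force a counter until the hash has enough leading zeros."""
    counter = 0
    while True:
        digest = hashlib.sha256(nonce + counter.to_bytes(8, "big")).digest()
        if leading_zero_bits(digest) >= difficulty_bits:
            return counter
        counter += 1

def verify(nonce: bytes, difficulty_bits: int, counter: int) -> bool:
    """Server side: a single hash operation suffices to check the solution."""
    digest = hashlib.sha256(nonce + counter.to_bytes(8, "big")).digest()
    return leading_zero_bits(digest) >= difficulty_bits
```

With a difficulty of 16 bits, solving takes on the order of 2^16 hash operations while verification costs a single hash; a server under load can raise the difficulty to slow down request floods without refusing service outright.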
In the future, network-based systems will provide uniform access to data of different origins on the basis of standardized formats (e.g., XML) and will thus offer extended possibilities for linking such data in order to derive new information. In this context, innovative approaches to realizing distributed applications (e.g., agent-oriented programming) are increasingly being used, which lead to more complex interaction patterns between individual system components than the classic client-server pattern. The "complete logical meshing" of distributed applications that has prevailed so far, in which every component can reach every other directly, will therefore be replaced by more complex connection topologies. From the point of view of network security, this raises a number of questions: What threats will arise from increased routing at the application level? How can users be enabled to define access rights to "their" information according to their own ideas? How can the availability of the systems be guaranteed in the face of potential denial-of-service attacks? Overall, the question to be answered is how cooperative information-processing systems can be constructed so that they are reliable and fault-tolerant, operable and manageable, powerful and (last but not least!) secure.
However, numerous experiences in recent years show that the diversity and often poor quality of software is increasingly proving to be a significant problem for network-based IT systems. In 2000, CERT registered 1,090 vulnerabilities in software systems. This figure rose to 2,437 in 2001 and 4,129 in 2002; for 2003, with 3,784 vulnerabilities discovered, it was only slightly below the previous year's figure, although this does at least indicate a trend reversal.
In the summer of 2003, for example, experience with the so-called W32/Blaster worm showed that quickly closing discovered security holes via software patches often proves illusory in practice. Although the attack only started on 11 August 2003 and exploited a vulnerability for which Microsoft had already made a patch available on 16 July (i.e., about a month earlier), the W32/Blaster worm spread surprisingly quickly and infected a large number of systems. This example also points to an alarming trend in the speed at which computer attacks spread. According to a study by Weaver, it is possible to construct hypervirulent active worm software capable of infecting, within 15 minutes, all computer systems that are connected to the Internet and susceptible to a specific vulnerability. According to another study's estimates, very compact software worms can under certain circumstances even infect all systems susceptible to a specific vulnerability in less than 30 seconds. It thus becomes apparent that, on the one hand, reducing vulnerabilities will be a growing challenge for software engineering and, on the other hand, a number of qualitatively new questions (see above) will arise for the design of distributed and network-based systems.
These questions are to be addressed at the Chair of Telematics/Computer Networks of the TU Ilmenau. The goal of the work planned in this context is to gain experience in the design and implementation of innovative distributed applications and to develop new methods for the security-oriented engineering of network-based systems. This work complements the other work described here, which focuses more on the lower, transport-oriented layers of the OSI model, and thus represents a valuable addition to the targeted research spectrum.
In the field of telecommunications networks, numerous developments in recent years have led to the cost-effective provision of bandwidth sufficient for most applications in fixed networks and to the ubiquitous availability of mobile communication services. In mobile networks, the current situation is that either sufficiently high bandwidth with limited mobility and QoS support (WLAN) or sufficient mobility and QoS support with limited bandwidth (GSM, UMTS) is available. Efforts already under way to integrate these network concepts should, however, at least enable a seamless transition between these mutually complementary technologies in the future, although a number of issues remain to be resolved regarding the integration of security concepts and the efficient, coordinated realization of handovers between different access networks. Qualitatively, this development is accompanied above all by the beginning convergence both of the protocols of classical telecommunications with the Internet protocol family and of fixed and mobile networks, and it is to be expected that this convergence process will continue.
Parallel to these developments, a trend has emerged in recent years from purely human-centered communication (e.g., telephony, Internet use) toward machine-to-machine communication, on which numerous monitoring and control applications are being built. A central development in this context is wireless sensor networks, in which numerous relatively primitive sensor nodes monitor certain environmental parameters and forward the measured values via wireless transmission to interested (and authorized!) systems. The considerable resource restrictions and specific operating conditions of sensor nodes require new concepts in the areas of communication protocols and network security in order to enable the economically justifiable, energy-feasible, and at the same time secure (!) operation of such networks.
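To make these resource constraints concrete: one common building block for securing sensor traffic is a symmetric message authentication code, truncated to keep the radio payload small, combined with a freshness counter against replays. The sketch below assumes a pre-shared per-node key; the packet layout and all names are hypothetical and serve only to illustrate the trade-off.

```python
import hashlib
import hmac
import struct

# A sensor reading is packed as node id (2 B) + counter (4 B) + value (2 B),
# followed by an HMAC-SHA256 tag truncated to 4 bytes -- a typical
# compromise between security margin and radio payload size on motes.
MAC_LEN = 4

def pack_reading(key: bytes, node_id: int, counter: int, value: int) -> bytes:
    """Node side: build header + reading + truncated MAC; the counter thwarts replays."""
    payload = struct.pack("!HIh", node_id, counter, value)
    tag = hmac.new(key, payload, hashlib.sha256).digest()[:MAC_LEN]
    return payload + tag

def unpack_reading(key: bytes, packet: bytes, last_counter: int):
    """Sink side: verify MAC and freshness, return (node_id, counter, value)."""
    payload, tag = packet[:-MAC_LEN], packet[-MAC_LEN:]
    expected = hmac.new(key, payload, hashlib.sha256).digest()[:MAC_LEN]
    if not hmac.compare_digest(tag, expected):
        raise ValueError("bad MAC")
    node_id, counter, value = struct.unpack("!HIh", payload)
    if counter <= last_counter:
        raise ValueError("replayed packet")
    return node_id, counter, value
```

The entire packet is 12 bytes, and verification costs one hash computation; both figures matter when every transmitted byte and every CPU cycle drains a battery that may have to last for years.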
Not only because of their expected great economic importance in the future, but above all because of the technical challenges they pose, the issues outlined above are to occupy a central position in the research program of the Chair of Telematics/Computer Networks.
Parallel to the constantly growing bandwidth available to end users, the demand for multimedia services is increasing. With the conventional client-server communication model, providing these services leads to multiple redundant transmissions of the same data and a very high network load on the service provider's side. The resulting high costs deter potential providers of interesting content from offering a multimedia service at all.
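The cost asymmetry can be made concrete with a back-of-the-envelope calculation (the numbers are illustrative): with client-server unicast, the source's upstream load grows linearly with the audience, whereas in a cooperative distribution scheme it stays bounded by the source's fan-out, because the viewers themselves forward the stream.

```python
# Upstream load on the original source when a 1 Mbit/s stream
# reaches n_viewers, under two distribution models.

def source_load_unicast(n_viewers: int, stream_mbit: float = 1.0) -> float:
    """Client-server: the source uploads one copy per viewer."""
    return n_viewers * stream_mbit

def source_load_cooperative(fanout: int = 4, stream_mbit: float = 1.0) -> float:
    """Cooperative distribution: the source only feeds its direct
    children; every peer forwards the stream to at most `fanout` others."""
    return fanout * stream_mbit

print(source_load_unicast(10_000))    # 10000.0 Mbit/s at the source
print(source_load_cooperative(4))     # 4.0 Mbit/s, independent of audience size
```

For 10,000 viewers the unicast source must sustain 10 Gbit/s of upstream traffic, while the cooperative source uploads only as many copies as it has direct children.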
To overcome this obstacle, various methods have been proposed and implemented, which, however, are regularly not an option, especially for providers with a limited budget. These include network multicast, a solution at the network layer that would avoid redundant transmission entirely but does not scale, because state must be maintained in every intermediate network device for each communication group. Other common solutions are provisioning via server farms or content delivery networks (CDNs), which, however, involve considerable financial effort.
A promising solution that requires no additional infrastructure and uses the resources available at the users themselves is application-layer multicast, often also referred to as cooperative multimedia streaming. After initial attempts to use P2P system architectures to distribute multimedia data, a broad body of research has since developed in this area. In the development of a cooperative streaming system, several basic functions can be identified on whose improvement individual research projects focus: in particular resource localization, neighborhood selection, routing, load balancing, scheduling, and buffering. To compare the different strategies and to test them in combination, the framework IlmStream was developed at the Department of Telematics/Computer Networks; it makes it possible to implement various algorithms for these functions and to integrate them into an otherwise complete streaming system.
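The pluggable design described for such a framework can be sketched as a set of strategy interfaces into which alternative algorithms are slotted and then wired into one node. The interfaces and class names below are hypothetical and do not reflect IlmStream's actual API; they only illustrate the structural idea.

```python
# Sketch of a pluggable cooperative-streaming framework: each core
# function (here: neighborhood selection and chunk scheduling) is a
# strategy that can be swapped and combined without touching the rest.

import random
from abc import ABC, abstractmethod

class NeighborSelection(ABC):
    @abstractmethod
    def pick_neighbors(self, candidates: list, k: int) -> list: ...

class Scheduler(ABC):
    @abstractmethod
    def next_chunk(self, missing: set, availability: dict) -> int: ...

class RandomNeighbors(NeighborSelection):
    """Baseline strategy: pick k partners uniformly at random."""
    def pick_neighbors(self, candidates, k):
        return random.sample(candidates, min(k, len(candidates)))

class RarestFirst(Scheduler):
    """Request the missing chunk held by the fewest neighbors first."""
    def next_chunk(self, missing, availability):
        return min(missing, key=lambda chunk: availability.get(chunk, 0))

class StreamingNode:
    """The framework wires the chosen strategies into a complete node;
    routing, load balancing, and buffering would plug in the same way."""
    def __init__(self, neighbors: NeighborSelection, scheduler: Scheduler):
        self.neighbors = neighbors
        self.scheduler = scheduler
```

A researcher comparing scheduling strategies would implement a second `Scheduler` subclass and instantiate `StreamingNode` with it, leaving every other component of the streaming system unchanged.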