Conference Program Thursday February 15
Speaker: Mikael Holmqvist, Sun Microsystems AB
Jiro technology brings the benefits of industry-defined standards and intelligent network connectivity to storage management. Jiro technology offers a proven environment for developing management software that can be deployed in diverse, distributed networks, regardless of underlying operating systems or hardware.
For developers, this translates into faster design cycles, lower development costs and more opportunity to focus on adding valuable functionality to management applications.
For enterprise IT managers, using applications and storage resources enabled with Jiro technology will result in improved resource management and utilization, lower costs, and greater control over critical information systems.
Speaker: Erik Möller, VERITAS Software AB
Erik Möller is Product Marketing Manager for the Nordic countries. He previously worked as a product specialist at VERITAS Software AB, and has spent five years supporting and installing business-critical systems.
In today’s economy, information is the fuel that drives business. Ensuring high availability for applications and data is critical for success. What is required to meet the demands of 24x7 availability, 99.999% uptime and an information flow that doubles every year? In this seminar, we will describe how to build and deploy HA and SAN solutions to meet these demands. We will also take a close look at the clustering technology, describe how it works and the requirements it places on other parts of the system. We will discuss a “layered” approach to managing availability and quality of service within data centers – providing you with a model for attaining the levels of availability or performance your business operations require, even in rapidly growing, changing environments.
Hardware is getting faster and cheaper all the time which makes it more attractive to build supercomputers and high-availability solutions using commodity hardware instead of buying expensive solutions from classic UNIX vendors.
By building a GNU/Linux based cluster we wished to achieve three goals:
In our presentation we will discuss the architecture, implementation and daily administration of a high-availability cluster based on the GNU/Linux operating system and commodity hardware. Moreover, we will report performance measurements and outline our procedures for upgrading system software, including the kernel.
Our experiences with commodity-hardware clusters, Linux and free software are gathered from our web-hosting facility in Copenhagen, Denmark. We have migrated from SGI Origin 200 servers with an external RAID system to a cluster of large personal computers without any special hardware or software. The cluster currently has 12 nodes; each node consists of an Athlon processor running at 800 MHz, 768 MB of memory and 300 GB of disk space.
Netgroup A/S hosts more than 250 web sites containing more than 50 GB of data. The traffic is in excess of 100 GB per day. A number of the web sites are dynamically generated and depend heavily on access to the MySQL relational database.
Authors: Jon Baldock, EMEA Office Solutions, Intel Corporation UK Ltd, and Craig Duffy, Principal Lecturer, University of the West of England, Bristol
Over the last decade the growth of e-commerce, along with the extensive use of the Internet and Intranets, has pushed the security of computer networks to the forefront. This paper looks at a rather neglected but very important area: malicious passive protocol analysis, or sniffers as they are commonly called. The paper reviews the role of sniffers in network security breaches and examines the reasons they are so difficult to combat. The paper then goes on to outline a novel suite of tools based on various approaches to detecting, or at least limiting the search space for, protocol analyzers. The proposed suite of tools is compared with some currently available tools and improvements are suggested. Finally the paper reviews the outlook for defenses against the malicious use of protocol analysis, reviewing the various strategies that could be employed from hardware approaches, through changes in network topologies, to protocol encryption and authentication. The likely impact of IP Next Generation (IPng/IPv6) is discussed in detail. The paper concludes that more can be done to combat protocol analysis but appropriate pro-active network security will require a combination of a large number of different tools and techniques.
Authors: Catharina Candolin, Janne Lundberg and Hannu H. Kari, Helsinki University of Technology
An ad hoc network is a collection of nodes that do not need to rely on a predefined infrastructure to keep the network connected. The nodes may vary in size, battery power, mobility patterns, functionality, etc. Although the basic assumption in ad hoc networking is that most nodes participate in network operations, there may still exist nodes that are not able to offer the network any functionality. For example, a sensor may very well perform its task of collecting data and transmitting it to a larger station, but asking the sensor to also participate in routing and network management, as well as requiring information for its own usage, might already be too much. We consider the ad hoc networks discussed in this paper to be administered by various organizations. When an organization wishes to use the services of another organization, it might not be willing to reveal any information about its internal structure to that organization. However, in order for the other organization to offer any services, it must be able to authorize the service requests.
The main focus of this paper is a model of authorization especially suited for mobile ad hoc networks that wish to preserve the privacy of their internal structure. Our solution is based on SPKI certificates, certificate chains, and proxy agents that perform certificate transformation and retrieve services. When a node in the ad hoc network requests a service, the proxy agent uses its own identity to retrieve the service on behalf of the node, thus hiding the requesting node from the service provider as well as the service origin from the requesting node. The certificate chain has, in a sense, been cut, and two virtual certificate chains have been created. Although several solutions for establishing trust using certificates exist, we argue that our solution meets the needs of mobile ad hoc networks better than existing solutions, since our model takes into consideration the fact that nodes may differ in capacity. Our model has the advantage of preserving the privacy of node internals to the network while at the same time hiding the origin of the service from the nodes. Furthermore, the idea of using certificate transformation has not been considered in previous solutions.
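The chain-cutting idea above can be illustrated with a minimal, self-contained sketch. The `Cert`, `Provider` and `ProxyAgent` classes below are toy stand-ins invented for this example; they are not the paper's SPKI data structures or interfaces:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Cert:
    """Toy stand-in for an SPKI authorization certificate:
    `issuer` delegates permission `tag` to `subject`."""
    issuer: str
    subject: str
    tag: str

def chain_authorizes(chain, root, principal, tag):
    """Walk a delegation chain starting at `root` and check that it
    ends by granting `tag` to `principal`."""
    current = root
    for cert in chain:
        if cert.issuer != current or cert.tag != tag:
            return False
        current = cert.subject
    return current == principal

class Provider:
    """A service provider that only needs to trust proxies,
    never the individual nodes behind them."""
    def __init__(self, trusted_proxies):
        self.trusted_proxies = set(trusted_proxies)

    def serve(self, requester, tag):
        if requester not in self.trusted_proxies:
            raise PermissionError("unknown requester")
        return f"{tag} result delivered to {requester}"

class ProxyAgent:
    """Cuts the certificate chain in two: the provider sees only the
    proxy's identity, and the node never learns anything about the
    provider's internal structure."""
    def __init__(self, name, internal_root):
        self.name = name
        self.internal_root = internal_root  # trust anchor inside the ad hoc net

    def request_service(self, node, internal_chain, tag, provider):
        # Virtual chain 1: node -> proxy, verified against the
        # organization-internal trust anchor.
        if not chain_authorizes(internal_chain, self.internal_root, node, tag):
            raise PermissionError("node not authorized internally")
        # Virtual chain 2: proxy -> provider, under the proxy's own identity.
        return provider.serve(requester=self.name, tag=tag)
```

A sensor authorized by its organization's root key can thus obtain a service while remaining invisible to the provider, which only ever sees the proxy.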
Author: Brian Pawlowski, Network Appliance Inc (see Th2)
A user’s perceived performance of a distributed file system is defined by a complex interplay of client platform, network architecture and choice of server. Within each domain, many variables can affect the achievable throughput for even a set of simple benchmarks. However, broad patterns emerge in the performance capacity of clients as a function of CPU and multiprocessing support.
This paper presents a survey of the performance of several client configurations and presents a normalized comparison based on the results of several benchmarks using the Network File System as a basis for comparison. A comparison of clients (Solaris, FreeBSD and Linux) provides a view of the current state of the art.
The effects of latency on throughput are discussed with some examples. A model is proposed to interpret the limits of parallelizing techniques such as multithreading in increasing throughput. The focus is primarily on Gigabit Ethernet performance, though the limits of slower network technologies are also covered. Factors affecting performance are described, with suggested areas for improvement mapped out. Pitfalls in tuning are touched on.
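To see why parallelism matters here, a back-of-the-envelope bound from Little's law helps (this sketch is illustrative only, not the model proposed in the paper): throughput is limited to the number of outstanding requests times the request size, divided by the round-trip time, and capped by the link rate.

```python
def nfs_throughput_mbps(outstanding, rtt_ms, io_kb, link_mbps):
    """Estimate throughput (Mbit/s) for a client that keeps
    `outstanding` requests of `io_kb` KB in flight over a path with
    round-trip time `rtt_ms`, capped by the link rate.  By Little's
    law: throughput = concurrency * request size / round-trip time."""
    bits_per_request = io_kb * 1024 * 8
    rate_bps = outstanding * bits_per_request / (rtt_ms / 1000.0)
    return min(rate_bps / 1e6, link_mbps)

# With 32 KB reads and a 1 ms round trip, a client with a single
# outstanding request reaches roughly 262 Mbit/s and cannot saturate
# Gigabit Ethernet; four or more requests in flight can.
```

Multithreading raises the achievable concurrency, which is why it pays off on fast networks where the round-trip time, not the wire speed, is the bottleneck.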
Finally, the implications of remote file access performance for today’s clients are summarized in the light of emerging technologies like DAFS and NFS Version 4.
Author: Harald Skardal, Network Appliance Inc
Harald Skardal is a Sr. Consulting Engineer with Network Appliance Inc. He is the technical editor for the next version of NDMP, version 4, and he is currently leading the work on the company’s data management strategy.
Brief history and current status; NDMP version 4: where we are; becoming an IETF standard. NDMP and NDMP extensibility: NDMP as a platform for data management; accessing vendor-specific functionality through NDMP.
Authors: Pasi Eronen, Helsinki University of Technology and Jonna Särs, Nixu Ltd
DNS has long been a good example of the lack of security in the basic Internet infrastructure. It is a critical service, but was originally not designed to resist active attacks. The DNS security extensions were defined to combat the problems: they provide data integrity and authentication using digital signatures, and optional authentication of transactions (requests and replies).
Another new feature of DNS is the possibility to dynamically update DNS data (RFC 2136). This can be used to update DNS records of hosts with dynamic IP addresses, for example. DNS dynamic updates can be protected using the DNSSEC transaction signatures, or the TSIG mechanism.
It is important to notice that there are really two separate DNS use cases with different security requirements. Querying for data requires data authentication but not necessarily authentication of messages. Dynamic updates require transaction authentication and also authorization, i.e. a way to specify who is allowed to change what.
So far, there have not been any good proposals for expressing authorization in this context. Existing solutions usually use local configuration files, which are essentially a form of access control lists. We see several problems in this approach. For example, a name server is not necessarily operated by the same party which actually owns the zone (and should be responsible for deciding who can change it).
In this paper, we propose a solution for authorizing DNS dynamic updates, based on the decentralized trust management approach. Basically, trust management systems use a set of unified mechanisms for specifying both security policies and security credentials. The credentials are signed statements (certificates) about what principals are allowed to do.
We have modified the BIND 9 name server to use this approach. For trust management we use the KeyNote 2 library, also used e.g. in OpenBSD’s ISAKMP implementation. Our solution supports the separation of DNS server administration and update authorization. KeyNote also allows specification of more flexible access restrictions than the use of ad hoc access control lists.
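As a rough sketch of what such a credential could look like, a KeyNote 2 assertion delegating update rights might read as follows. The attribute names (app_domain, record_type, zone) and the truncated licensee key are invented for illustration; the actual attributes exposed to KeyNote by the modified BIND 9 are defined by the paper:

```
KeyNote-Version: 2
Comment: The zone owner (POLICY) authorizes the licensee key to
         update A records under dyn.example.com.  Attribute names
         and the key are illustrative placeholders.
Authorizer: "POLICY"
Licensees: "rsa-base64:MIGJAo..."
Conditions: app_domain == "dns-update" &&
            record_type == "A" &&
            zone == "dyn.example.com" -> "true";
```

Because the credential is a signed statement, the zone owner can issue it to an update client directly, without the name server operator maintaining an access control list on its behalf.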
By applying state-of-the-art security mechanisms, we have created a more flexible and scalable solution than the existing approaches. We hope this allows more widespread use of DNS dynamic updates.