Regaining Single Sign-On

Background

In the mainframe era, computer users had only one username and password to remember, as there was only one computer to access. With the advent of networks, people suddenly acquired many computing accounts, each with its own username and password to be remembered. Systems like YP, Kerberos, and NDS were developed to bring the problem under control, and for a while many users could again access all their computing resources with a single password. Unfortunately, the rise of networked dataset providers has caused a new explosion in usernames and passwords to be managed. This is bad enough for an individual user, who might now have four or five identities to remember, but it is becoming a nightmare for the support organisations that must issue and manage the passwords.

A new group of users is now emerging that is not well served by existing mechanisms: Brunel people who need access to resources remotely. Users of our own dial-in modems get a service similar to the on-campus experience, but students on work placement and people using commercial ISPs may not get access to certain datasets and `internal use only' web pages at Brunel.

From a systems point of view, the need for improved levels of security has led to a number of crypto-based authentication schemes becoming common. The quality of these designs varies widely, and they are generally incompatible, so a site wishing to have secure authentication for all services may have to maintain several different forms of `encrypted password'. Very few of the systems in common use at present support easy and scalable interworking between authentication domains.

User Authentication at Brunel

For many years, Brunel has worked hard to maintain a `single service image'. A central feature of this policy is the single username / single password system. This makes life easier for computer users, and at the same time designs out a large source of support calls.

The system design is based on one of the most critical rules of good practice in databases: each item of data should have a single master copy. Thus there is a single authentication database, and all user-authentication transactions refer to it. (Note that the rule does not preclude slave copies, so we maintain a number of them for resilience and performance.)

Where outside data services such as BIDS have required user authentication we have generally provided a scripted logon system. A privileged program first checks that the requesting user is permitted to access the service they have requested, and then invisibly handles the connection and logon stages. This approach works well for character-mode services and has been used with both X.25 and telnet for many years.
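As an illustration, a scripted logon of this kind might look like the following Python sketch. The service host, permission file, and stored credentials here are all hypothetical, and a production version would run with privilege and keep the credentials in a protected file.

    import os, pwd, sys, telnetlib

    SERVICE_HOST = "bids.example.ac.uk"    # hypothetical dataset host
    ALLOWED_FILE = "/etc/bids.allowed"     # hypothetical list of permitted users
    SERVICE_USER = "brunel"                # shared service account (example)
    SERVICE_PASS = "secret"                # would really come from a protected file

    def main():
        # Check that the requesting user is permitted to use the service
        user = pwd.getpwuid(os.getuid()).pw_name
        if user not in open(ALLOWED_FILE).read().split():
            sys.exit(user + " is not registered for this service")

        # Handle the connection and logon stages invisibly
        tn = telnetlib.Telnet(SERVICE_HOST)
        tn.read_until(b"login: ")
        tn.write(SERVICE_USER.encode() + b"\r\n")
        tn.read_until(b"Password: ")
        tn.write(SERVICE_PASS.encode() + b"\r\n")

        # Hand the authenticated session over to the user
        tn.interact()

    if __name__ == "__main__":
        main()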

The arrival of commercial web-based datasets has brought new problems. Providers have chosen different methods to authenticate users. Some use the IP address that originates the connection, some take this further and make it work correctly with caches and proxies, and others require the use of `HT-access' web passwords. Site-licensed services using IP addresses for authentication are easy to support by simple cache configuration, but password-based services need a new approach. The group of services using ATHENS is a particular case in point: usernames and passwords were recently issued to all Brunel staff and students for these services, and a significant number of `lost password' support calls have already occurred.

Solving the immediate problems

The web-based datasets are the main user-visible deficiency in our single-password service. One obvious place to attack this problem is at the Brunel web cache. Another is at the Brunel web server, by providing CGI scripts to act as proxy agents. Two aspects need to be considered in either case:

Remote users will be able to benefit from this transparently by using the Brunel cache when accessing controlled sites.

It may be possible for the cache or server to do `lazy authentication', to avoid asking users for their username and password unless they access a service that needs authentication.

Authentication to the Cache

Authentication to the cache can be done in several ways. The most obvious is to require the user to supply their Brunel username and password to the cache when they use it. This is supported as a standard feature in recent browsers and cache modules are available to implement it.

Advantages:

- Uses the existing single Brunel username and password, so there is nothing new to remember.
- Supported as a standard feature in recent browsers, with no special client software needed.

Disadvantages:

- With HTTP Basic authentication the password crosses the network in an easily decoded form on every request.
- Users must type their password at least once per browsing session.
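By way of illustration, Squid-style caches can call out to an external authenticator program that reads one "username password" line per request and answers OK or ERR. A minimal Python sketch, assuming a hypothetical file of crypt()-hashed passwords copied from the central database:

    import crypt, sys

    PASSWD_FILE = "/etc/cache-passwd"   # hypothetical slave copy of the auth database

    # Load "username:cryptedpassword" lines into a dictionary
    db = {}
    for entry in open(PASSWD_FILE):
        user, hashed = entry.strip().split(":", 1)
        db[user] = hashed

    # The cache writes one "username password" pair per line and expects OK or ERR
    for line in sys.stdin:
        try:
            user, password = line.rstrip("\n").split(" ", 1)
            ok = crypt.crypt(password, db[user]) == db[user]
        except (ValueError, KeyError):
            ok = False
        sys.stdout.write("OK\n" if ok else "ERR\n")
        sys.stdout.flush()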

The cache can also support IDENT authentication. This is transparent to the user, but in its basic form is far from secure.

Advantages:

- Completely transparent to the user: no prompt and no extra typing.

Disadvantages:

- Easily spoofed: the cache must trust whatever the client machine's IDENT service reports.
- Only works where the client machine actually runs an IDENT service.

It may be possible to combine the IDENT scheme with a cryptographically signed result, such as a Kerberos ticket, to work around the security issues.
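For reference, the basic IDENT (RFC 1413) lookup that either variant builds on is simple to perform: the cache connects back to port 113 on the client machine and asks who owns the TCP connection it has just accepted. A minimal Python sketch (the weakness is plain to see: the cache must trust whatever the remote IDENT service chooses to say):

    import socket

    def ident_lookup(client_ip, client_port, local_port, timeout=5.0):
        # Ask the client machine's identd who owns the connection
        # from (client_ip, client_port) to our local_port.
        s = socket.create_connection((client_ip, 113), timeout)
        s.sendall(("%d , %d\r\n" % (client_port, local_port)).encode())
        reply = s.recv(1024).decode("ascii", "replace")
        s.close()
        fields = [f.strip() for f in reply.split(":")]
        if len(fields) >= 4 and fields[1] == "USERID":
            return fields[3]    # the username the client claims
        return None             # ERROR or malformed response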

An interesting possibility is raised by CMU's Project Minotaur, which uses browser plug-ins and Java to add Kerberos authentication transparently to web requests. It may be possible to use the same technique to pass other forms of credentials.

A very strong authentication scheme is becoming available, based on client-side X.509 certificates. Many users are already familiar with the `secure server' facility that shows up as a locked padlock icon on the browser toolbar, but few know that they can have a certificate of their own. By issuing certificates to all users we would enable them to prove their identity as a Brunel user without ever exposing a password on the network.

Advantages:

- Users can prove their Brunel identity without ever exposing a password on the network.
- Very strong authentication, based on public-key cryptography.

Disadvantages:

- A certificate must be issued to, and installed by, every user.
- Few users are familiar with certificates, so education and support will be needed.
- Users who move between machines must carry their certificate with them.
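By way of illustration, requiring a client certificate at a TLS endpoint takes only a few lines with a modern SSL library. In this Python sketch the server certificate, key, and Brunel CA file names are assumptions:

    import socket, ssl

    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    ctx.load_cert_chain("server-cert.pem", "server-key.pem")  # assumed file names
    ctx.load_verify_locations("brunel-ca.pem")   # CA that signs user certificates
    ctx.verify_mode = ssl.CERT_REQUIRED          # insist on a client certificate

    srv = socket.create_server(("", 8443))
    while True:
        conn, addr = srv.accept()
        tls = ctx.wrap_socket(conn, server_side=True)
        # The verified certificate identifies the user; no password is ever sent
        print("authenticated client:", tls.getpeercert()["subject"])
        tls.close()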

Insertion of authentication data

The Squid cache has a modular structure, allowing various features to be enabled when needed. One module is a general rewriting service, and we may be able to use it to insert appropriate authentication data for remote services.

Some services have a single ID to be shared by all Brunel users, and others require a separate ID per person. The rewriting module would need access to a database describing each service and containing the authentication data to be sent. This database would need to be carefully protected, perhaps by encrypting the more sensitive fields.
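Squid's rewriting interface is line-oriented: the helper receives one request per line and replies with a rewritten URL, or a blank line to leave the request untouched. A minimal Python sketch, with an in-memory table standing in for the protected credentials database, might rewrite URLs to carry HTTP Basic credentials:

    import sys

    # Hypothetical table: service host -> (username, password) to insert
    SERVICES = {
        "www.dataset.example.com": ("brunel", "shared-secret"),
    }

    # Squid sends "URL client/fqdn ident method" per line; reply with the
    # rewritten URL, or a blank line to pass the request through unchanged.
    for line in sys.stdin:
        url = line.split()[0]
        out = ""
        if url.startswith("http://"):
            host = url[7:].split("/", 1)[0]
            if host in SERVICES:
                user, password = SERVICES[host]
                out = "http://%s:%s@%s" % (user, password, url[7:])
        sys.stdout.write(out + "\n")
        sys.stdout.flush()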

Proxies and Redirectors

The cache is not the only place that authentication data can be manipulated. A number of CGI-based proxies and redirectors exist for similar purposes. Some of these act as agents, fetching pages from remote services and rewriting the URLs to appear local. Others co-operate with content providers by creating cookies that can be used by the content provider's server to authenticate requests.
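A minimal agent of the first kind can be sketched in a few lines of Python. The provider URL and script location are assumptions, and a real version would need to handle images, cookies, and error cases:

    import os
    import urllib.request

    SERVICE = "http://www.dataset.example.com"   # hypothetical provider
    SELF = "/cgi-bin/dataset-proxy"              # where this CGI script lives

    def main():
        # Fetch the requested page from the remote service
        path = os.environ.get("PATH_INFO", "/")
        page = urllib.request.urlopen(SERVICE + path).read().decode("latin-1")

        # Rewrite the provider's absolute links to point back at this proxy
        page = page.replace(SERVICE, SELF)

        print("Content-Type: text/html\r\n")
        print(page)

    if __name__ == "__main__":
        main()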

Examples to be investigated include:

- The Brown proxy.

Many of the issues associated with these systems are the same as those for the cache-based systems discussed above, with the exception that offsite users may not need to reconfigure their browsers to benefit.

The JTAP-funded CANDLE-Athens project is investigating similar systems and has already expressed interest in discussing SSSO issues with Brunel. Related EU-funded projects CANDLE, PRIDE, and DESIRE also provide useful information.

Single authentication per session

Reducing the number of passwords to be managed is only part of the wider Single Sign-On problem. Users get frustrated if they are asked to type their password several times during a session, and every time they type it the password is at risk from `shoulder-surfers'. Ideally the PC or workstation should act as the user's agent for the duration of their session and should handle all authentications to remote services transparently.

Services that can give rise to re-authentication include Web, Mail (IMAP/POP etc), and PC-to-Unix telnet and X connections. Other client-server systems such as SQLnet and Dolphin also require their own authentication. (In some cases the re-authentication is valuable, particularly where the service being accessed needs greater protection than the user's own files).

Unfortunately almost every service has its own authentication scheme built into its protocol. There is work in progress in the IETF to build generic mechanisms, but there are still several competing proposals and few client programs implement any of them. For this reason it is unlikely that we will find a complete solution in the near future, but we should be able to make progress in one or two areas.

A relevant technology here is ssh - the Secure Shell. We already use this to protect sensitive inter-workstation traffic on Unix, and it is also available for NT. SSH provides a general wrapper service that can be used to protect and authenticate certain other protocols.
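For instance, an IMAP connection can be carried over an ssh port forward so that both the logon and the mail traffic are encrypted. A sketch (the mail host name is an assumption; the mail client is then pointed at localhost port 1143):

    import subprocess

    # Forward local port 1143 to the IMAP port on the mail server,
    # over an authenticated, encrypted ssh channel.
    subprocess.run([
        "ssh", "-N",                  # no remote command, forwarding only
        "-L", "1143:localhost:143",   # local 1143 -> IMAP (143) on the far side
        "mailhost.brunel.ac.uk",      # assumed mail server name
    ])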

Another important technology is the GSS-API (Generic Security Services Application Programming Interface) which allows authentication and authorisation issues to be handled by generic re-usable code modules. It is believed that a few recent client programs (e.g. Simeon) make use of this, which might allow us to remove the need for users to give their password again just to read mail.

Longer-term development

It is clear that certificate-based systems will have an important role to play in the future, so we should track development and be prepared to build a pilot system within the next year or so.

The limitations of passwords for authentication are well known. We will almost certainly require higher security for some systems in the future. It is likely that higher-security systems will be based on hardware `tokens' - probably some form of smart-card, though an interesting development is the cryptographic button which can be mounted on a signet ring. Many token-based systems will require readers on all PCs and workstations, so cost will be a limiting factor in their deployment. As secure tokens are likely to work with a form of public-key certificate, they should be tracked alongside the certificate-based systems.

The Desktop Management Task Force (DMTF) and the Microsoft/Cisco Directory-Enabled Networks (DEN) initiative have merged their schemas to permit a single directory to be used to manage an organisation's resources. As access to the data will use LDAP in most cases, it should be possible to provide some level of inter-domain working, though the practicalities will depend on how many proprietary extensions get added by suppliers. A directory is a critical part of a Public Key Infrastructure, so LDAP has a central role in certificate systems as well.

We should track the DMTF/DEN work and be ready to take advantage of any products at the appropriate time. In parallel we should consider implementing LDAP as the management database for user authentication data and possibly also for mail-system configuration. LDAP has a rich information model, which would allow us to support the various different authentication flavours needed for Web, NT, IMAP, etc.
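As a sketch of how this might look, the python-ldap module can query such a directory in a few lines. The server name, base DN, username, and attribute names below are assumptions:

    import ldap

    conn = ldap.initialize("ldap://ldap.brunel.ac.uk")   # assumed directory server
    conn.simple_bind_s()    # anonymous bind for the search

    # Fetch the entry holding a user's per-service authentication data
    results = conn.search_s(
        "ou=people,dc=brunel,dc=ac,dc=uk",   # assumed base DN
        ldap.SCOPE_SUBTREE,
        "(uid=jbloggs)",                     # hypothetical username
        ["userPassword", "mail"],            # attribute names are examples
    )
    for dn, attrs in results:
        print(dn, attrs)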

Immediate action

The largest immediate benefit to users would come from improving web authentication, so Systems will define and implement a project in this area. The aims will be:


Andrew Findlay
Head of Networking and Systems
21 April 1999
Updated 23 April 1999
Reference to Brown proxy added March 2000
Andrew.Findlay@brunel.ac.uk