Network Performance


Note: This paper was written in 1995, so some of the figures are out of date. The general principles remain true.

What is reasonable network performance? What can be done to improve it?

These are very difficult questions to answer because `the network' does so many different things for so many people, but this article should help you to understand the issues.

There are four main factors that affect perceived network performance: Bandwidth, Hardware Problems, User Choices, and Server Load.

Bandwidth

The capacity of a network to transmit data is called Bandwidth, and it is expressed in bits per second. Within each campus, most of the Brunel network operates at 10Mb/s (ten million bits per second). This is enough to transfer about one megabyte per second. Each 10Mb/s connection serves a number of rooms: often a whole building, although teaching classrooms normally have separate connections. Sharing bandwidth is a very cost-effective system and it does not often result in major congestion: peak loads seldom exceed 30% of the available capacity on our busiest networks.

New applications such as video-conferencing will use more bandwidth, so we are monitoring the situation closely. When the traffic on part of the network gets too high we can split it into smaller sections so that the available bandwidth is shared by fewer people. The connections between the main network routers operate at 100Mb/s.

The connection between Uxbridge and Runnymede runs at 2Mb/s in each direction, and the link from Uxbridge to Osterley runs at the same speed. This is fast enough that it does not have a noticeable effect on the performance of most applications.

Brunel's link to SuperJANET is more complex to describe. The link connecting us to BT's SMDS exchange runs at 34Mb/s, but not all of this bandwidth is available to us. The limit on traffic coming into Brunel is about 25Mb/s but the outbound limit is 10Mb/s (soon to be reduced to 4Mb/s, though we have the option of subscribing to higher speeds by paying the appropriate annual fees). About 80 of the UK's more active universities and research sites are connected to SuperJANET on the same basis, and traffic is supposed to flow between the sites without restriction. It has not yet been possible to demonstrate such a large traffic flow, but experience so far has been good. This means that many UK sites are connected at speeds comparable to those of our local network.

Within the UK Academic Community there are also several hundred smaller colleges and research centres connected at speeds ranging from 9.6kb/s to 2Mb/s, with 64kb/s being a very common speed. When communicating with such sites, the performance bottleneck is usually the site access line. A 64kb/s link will transfer about 8 kilobytes per second, so loading a 1MB file across such a link will take at least two minutes. Dial-up modem links often operate at less than a quarter of this speed.
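
As a rough illustration of this arithmetic, the sketch below estimates best-case times to move a 1MB file across links of the speeds mentioned above. It ignores protocol overhead, shared cabling and server load, so real transfers will always be slower; the link labels are just descriptive.

    # Best-case transfer time estimates. Protocol overhead, shared cabling
    # and server load are ignored, so real transfers will be slower.

    link_speeds_bits_per_sec = {
        "campus ethernet (10Mb/s)": 10_000_000,
        "inter-campus link (2Mb/s)": 2_000_000,
        "small-site access line (64kb/s)": 64_000,
        "slowest sites (9.6kb/s)": 9_600,
    }

    def transfer_time_seconds(file_bytes, bits_per_sec):
        """Time to move file_bytes across an otherwise idle link."""
        return (file_bytes * 8) / bits_per_sec

    one_megabyte = 1_000_000  # decimal megabyte, as used in the article

    for name, speed in link_speeds_bits_per_sec.items():
        seconds = transfer_time_seconds(one_megabyte, speed)
        print(f"{name}: about {seconds:.0f} seconds for a 1MB file")

The 64kb/s case comes out at about 125 seconds, which is the "at least two minutes" figure quoted above.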

Links to commercial Internet providers in the UK are now quite good, but very few commercial sites even have 64kb/s connections, so do not expect fast transfers to and from `co.uk' sites.

International links are a major bottleneck. Most countries in Western Europe now have internal networks running at 34Mb/s or above, but links between countries tend to be at 2Mb/s or less. The reason is very simple: money. A trans-border 2Mb/s link typically costs 250000 pounds per year at each end. The problem is worse in continental Europe than it is in the UK, which leads to the weird situation that a link to New York costs less than one to Stuttgart! The UK Academic Community currently has about 4Mb/s available to the US and Far East, and about 4Mb/s to mainland Europe. All the international links are heavily overloaded, with the US link being worst in the afternoon because the Americans are then awake and using it as well as us.

An EC project called EUROCAIRN is trying to improve the situation within Europe, but there does not appear to be enough political support to force the PTTs (phone companies) to charge reasonable tariffs. Putting this into perspective: it is estimated that the capital cost of the latest transatlantic cable was equivalent to 100000 pounds per 2Mb/s channel. At current tariffs that would pay back in three months if all channels were sold. Maybe I should buy some BT shares after all....
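
For perspective on that payback claim, here is a rough calculation using the figures above. Whether the tariff would be collected at one end of each channel or at both is not stated, so the sketch shows both cases; they bracket the three-month figure.

    # Rough payback estimate for one 2Mb/s transatlantic channel, using the
    # figures quoted above. Whether the tariff is collected at one end or
    # both is an assumption, so both cases are shown.

    capital_cost_per_channel = 100_000   # pounds (estimated capital cost)
    annual_tariff_per_end = 250_000      # pounds per year (current tariff)

    for ends in (1, 2):
        income_per_year = ends * annual_tariff_per_end
        payback_months = capital_cost_per_channel / income_per_year * 12
        print(f"Tariff collected at {ends} end(s): "
              f"payback in about {payback_months:.1f} months")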

Hardware Problems

Most offices and labs at Brunel are wired using `thinwire co-ax' cable, sometimes called 10Base2. The connectors are `BNC' bayonet fittings. When correctly used, this is a good cost-effective cabling system but it does suffer from certain problems in inexperienced hands.

The most common problem is caused by people who disconnect a PC from the wall and take the cables away. As each `segment' of cable is shared with up to 30 other computers, a lot of people can be affected by one mistake. Even computers that are on the `live' side of the break will stop working because reflections are set up by the open end of the cable. This fault can be recognised easily: a whole row of offices loses access to the network completely.

A more subtle problem is caused by incorrect extension of cables: the `T' connector must be directly on the back of the computer, with both wires from the wall going to it. Simply adding wire between a `T' and a computer will cause enough reflections in the cable to affect the network. This fault can be very difficult to recognise: sometimes a row of offices loses contact, but more often they get very poor response because some packets get through and others don't.

Similar problems can be caused by using the wrong type of cable or connector. BNC connectors and co-ax cable are also used for lab equipment and video links. Unfortunately, most such cables are 75-ohm types rather than the 50-ohm cable used for networking. It is extremely difficult to tell the difference by sight: special test equipment is needed.

Sometimes people trip over carelessly-routed cables, or simply pull on them to create slack. This can result in damage to the connectors: we sometimes find plugs whose centre pin has been pulled back into the body of the plug. Again, this can be a very difficult fault to recognise though it is usually easy to find the offending plug once the problem has been diagnosed. Symptoms include flaky performance and high error statistics on network interfaces.

The most intractable problem of all is over-extension of cable segments. There are more than 200 ethernet segments in the Brunel network, and each is limited to a maximum of 185m of cable and 30 connected devices. The recent explosive growth in demand for network connections has broken the design assumptions that were made when the original wiring was installed. As a result we often find segments with too much wire and too many computers attached. The only cure is to install new wiring and more network repeaters, but this costs money and Computer Centre budgets are not expanding fast enough to meet the demand. The symptoms of over-extended cables vary greatly: some machines almost stop working (laptop PCs are worst here) while others carry on almost without noticing (Sun workstations seem particularly immune); network error rates may rise, and collision rates usually do rise, although other things can cause the same effect.
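
Anyone planning a change to a thinwire segment can check it against those design limits with a trivial calculation; the segments in this sketch are made-up examples.

    # Check a proposed thinwire (10Base2) segment against the design limits
    # mentioned above. The example figures are made up for illustration.

    MAX_CABLE_METRES = 185
    MAX_DEVICES = 30

    def segment_within_limits(cable_metres, devices):
        """True if the segment meets the 10Base2 length and device limits."""
        return cable_metres <= MAX_CABLE_METRES and devices <= MAX_DEVICES

    print(segment_within_limits(150, 25))   # within limits  -> True
    print(segment_within_limits(220, 34))   # over-extended  -> False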

In the longer term, the solution to most of the wiring problems is to convert to an `Unshielded Twisted Pair' (UTP) structured wiring system. Under this scheme every computer has a separate connection back to a network hub so it is much less likely to interfere with its neighbours. Higher speeds (100Mb/s or more) can also be handled easily. UTP wiring is installed in Mill and Faraday halls, and is always used for new installations. Converting the existing setup in the academic buildings will be a large job costing between 150 and 250 pounds per office if reasonable groups of rooms are re-wired at the same time.

Other hardware problems concern ethernet adapter cards and PC configurations. A great many companies produce ethernet cards for PCs, and some of them are not very good. We have found cards that will not work if connected to segments with more than 20m of cable, and cards that fail if the network is even slightly busy. Adapters that connect to PC parallel ports seem to be a particular problem because the parallel port is not really fast enough to transfer ethernet data. Configuration of PC interrupts and I/O areas is also fraught with difficulty, and problems can show up months after an apparently successful installation. The best advice for PC users is to buy only the Computer Centre's recommended PCs and ethernet adapters: they may not be the fastest on the market, but at least we have a lot of experience with setting them up!

Finally, a most important rule: never connect anything other than a computer to the network! Quite apart from the obvious things like telephones and kettles, you should never connect other types of network equipment (repeaters, bridges, routers, media-converters) because you might be breaking the configuration rules, and Computer Centre staff depend on their knowledge of network topology when looking for faults.

User Choices

The way you use your computer and the programs you run can affect its performance and the performance of other people's computers. In an ideal world this would not be the case, but it does help to know a bit about what is going on...

Always remember that local disks tend to be faster than networked ones, so if you are working on large files it may be best to copy them to a local disk first and copy them back to the network filestore at the end of the job. Don't forget to copy them back though, as the network filestore gets backed up regularly and most local disks don't!

Do not open more windows than you really need. Under MS-Windows, every open application window can slow down every other application, even if it is not in active use. Even in Unix, an open window must consume some resources though the impact tends to be small. Some applications remain active even when not being used directly: desk clocks and constantly-changing backgrounds are obvious cases, but some wordprocessors do this too!

Putting extra network filesystems into your PATH variable can degrade performance, so do not issue unnecessary `use' commands.

On interactive machines, avoid running things that wake up every few seconds to look for new files or new logins. Some of these have an enormous impact on performance.

Some apparently irrelevant things can have large effects on performance: if you think you are getting poor network performance try comparing your setup and style of use carefully with a neighbour. If you still cannot resolve the problem, contact User Support for advice.

Server Load

Many network services are provided by machines that serve tens or hundreds of people at a time. This obviously has an effect on performance, though not always where you might expect! Consider some numbers:

There are 7800 taught-course students registered on the Brunel network. All their home-directory (drive H: in DOS terms) files are served by three computers at Uxbridge and one at Runnymede. That works out at about 2000 users per computer, yet the load on these machines is acceptable because very little of the traffic generated by each person relates to their private filestore. Similarly there are 4000 staff and researchers whose files are spread across three machines at Uxbridge while Runnymede staff share their server machine with the students. These machines only tend to give poor performance if someone writes a program that does something very unusual with their files.

There were about 1600 PCs active on the network in the month of June 1995. Most of the programs used on these machines are held on the `PC servers' (The servers are not PCs at all, but they exist mainly to service PCs). There are eleven such servers holding identical copies of each file so they can share the workload. These machines are often blamed for poor response times, and the load figures provide some support for this view. We estimate that the current type of server machine will support 30 classroom PCs or 100 office PCs with acceptable performance, though the increasing size of most application software will force us to change the estimates or upgrade the servers quite soon. It is easy to do the sums and discover that there are more PCs in use than we have server capacity for. Fortunately the estimates are not hard limits, and for much of the day the performance is quite acceptable. The problems come at peak times such as class changeovers, staff arriving in the morning or returning from lunch etc. Anyone who can adapt their working patterns to avoid the peaks will get better service.
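
Those sums, using the figures above, look something like this. The real split of the 1600 PCs between classrooms and offices is not stated, so the sketch simply shows the nominal capacity at the two extremes:

    # Back-of-envelope check of PC server capacity, using the figures above.
    # The real mix of classroom and office PCs is not stated, so nominal
    # capacity is shown as a range between the two extremes.

    servers = 11
    classroom_pcs_per_server = 30    # estimated capacity per server
    office_pcs_per_server = 100      # estimated capacity per server
    active_pcs = 1600                # PCs seen on the network in June 1995

    worst_case_capacity = servers * classroom_pcs_per_server   # 330
    best_case_capacity = servers * office_pcs_per_server       # 1100

    print(f"Nominal capacity: {worst_case_capacity} to {best_case_capacity} PCs")
    print(f"Active PCs: {active_pcs}")
    print(f"Shortfall even in the best case: {active_pcs - best_case_capacity}")

Even treating every machine as an office PC, the nominal capacity is 1100, which is well short of the 1600 PCs in use.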

You can help

The way the network is used affects its performance. I do not want to discourage use of the network or even restrict what it is used for, but in our resource-limited environment we must try to get the best value from what we have. I gave some suggestions in the `User Choices' section above; here are some more:

Conserve international bandwidth by fetching files from UK sites if at all possible. The UKUUG/SunSite archive at src.doc.ic.ac.uk has a vast collection of files, as do the HENSA sites micros.hensa.ac.uk and unix.hensa.ac.uk.

Take care of the network wiring: don't pull on the cable, don't use odd bits of cable that happen to look similar, and don't ask for longer network cables than you really need.

And of course: make the University successful so that we get the resources to improve the infrastructure!

Andrew Findlay
Head of Networking and Systems
July 1995


