Gigabit Ethernet

Since its inception at Xerox Corporation in the early 1970s, Ethernet has been the dominant networking protocol. Of all current networking protocols, Ethernet has by far the largest number of installed ports and offers better price/performance than Token Ring, Fiber Distributed Data Interface (FDDI), and ATM for desktop connectivity. Fast Ethernet, which increased Ethernet speed from 10 to 100 megabits per second (Mbps), provided a simple, cost-effective option for backbone and server connectivity.

Gigabit Ethernet builds on top of the Ethernet protocol but increases speed tenfold over Fast Ethernet, to 1000 Mbps, or 1 gigabit per second (Gbps). The protocol, standardized in June 1998, promises to be a dominant player in high-speed local area network backbones and server connectivity. Because Gigabit Ethernet builds so heavily on Ethernet, customers will be able to apply their existing knowledge base to manage and maintain gigabit networks.

History of Ethernet

The Fast Ethernet standard was pushed by an industry consortium called the Fast Ethernet Alliance. A similar alliance, the Gigabit Ethernet Alliance, was formed by 11 companies in May 1996, soon after the IEEE announced the formation of the 802.3z Gigabit Ethernet Standards project. At last count there were over 95 companies in the alliance, drawn from the networking, computer, and integrated-circuit industries.

The original 802.3 standard was published in 1985. Originally, two types of coaxial cable were used, known as Thick Ethernet and Thin Ethernet. Later, unshielded twisted pair (UTP) copper cabling, of the kind used for telephones, was added.

In 1995, the IEEE adopted the 802.3u Fast Ethernet standard, a 100 Mbps version of Ethernet that established Ethernet's scalability. Fast Ethernet also brought full-duplex operation. Until then, all Ethernet worked in half-duplex mode: even if there were only two stations on a segment, both could not transmit simultaneously. With full-duplex operation, this became possible.

The next step in the evolution of Ethernet is Gigabit Ethernet. The standard is being developed by the IEEE 802.3z committee.

Physical Layer
 

The Physical Layer of Gigabit Ethernet uses a mixture of proven technologies from the original Ethernet and the ANSI X3T11 Fibre Channel specification. Gigabit Ethernet is expected to support four physical media types, defined in 802.3z (1000Base-X) and 802.3ab (1000Base-T).

The 1000Base-X standard is based on the Fibre Channel Physical Layer. Fibre Channel is an interconnection technology for connecting workstations, supercomputers, storage devices, and peripherals. Fibre Channel has a four-layer architecture; the lowest two layers, FC-0 (interface and media) and FC-1 (encode/decode), are reused in Gigabit Ethernet. Since Fibre Channel is a proven technology, reusing it greatly reduces the development time of the Gigabit Ethernet standard.

Three types of media are included in the 1000Base-X standard:

       1000Base-SX: 850 nm laser on multimode fiber.
       1000Base-LX: 1300 nm laser on single-mode and multimode fiber.
       1000Base-CX: short-haul copper "twinax" shielded twisted pair (STP) cable.

1000Base-T is a standard for Gigabit Ethernet over longer runs of copper UTP. The standards committee's goal is to support links of 25 to 100 m over four pairs of Category 5 UTP. This standard is being developed by the 802.3ab task force and is expected to be completed by early 1999.
 

MAC Layer

The MAC Layer of Gigabit Ethernet uses the same CSMA/CD protocol as Ethernet. The maximum length of a cable segment used to connect stations is limited by the CSMA/CD protocol. If two stations simultaneously detect an idle medium and start transmitting, a collision occurs.

Ethernet has a minimum frame size of 64 bytes. The reason for having a minimum frame size is to prevent a station from completing the transmission of a frame before its first bit has reached the far end of the cable, where it may collide with another frame: the sender must still be transmitting when a collision signal propagates back to it, so the worst-case time to detect a collision is the round-trip propagation time across the cable. This time is called the Slot Time. (A more useful metric is Slot Size, the number of bytes that can be transmitted in one Slot Time. In Ethernet, the slot size is 64 bytes, the minimum frame length.)

The maximum cable length permitted in Ethernet is 2.5 km (with a maximum of four repeaters on any path). As the bit rate increases, the sender transmits the frame faster. As a result, if the same frame sizes and cable lengths are maintained, a station may finish transmitting a frame before a collision at the far end of the cable can be detected. So one of two things has to be done: (i) keep the maximum cable length and increase the slot time (and therefore the minimum frame size), or (ii) keep the slot time the same and decrease the maximum cable length, or both. In Fast Ethernet, the maximum cable length is reduced to only 100 meters, leaving the minimum frame size and slot time intact.
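The arithmetic behind this tradeoff can be sketched in a few lines of Python. This is a deliberately crude model: the propagation speed is an assumed round figure, and real standards must also budget for repeater and encoding delays, so actual limits are tighter than this model suggests.

```python
# Crude model of the CSMA/CD slot-time / cable-length tradeoff.
# Assumption (illustrative): signals propagate at roughly 2e8 m/s.
PROPAGATION_SPEED = 2e8  # meters per second

def slot_time_seconds(slot_size_bytes, bit_rate_bps):
    """Time to transmit one slot's worth of bits."""
    return slot_size_bytes * 8 / bit_rate_bps

def max_one_way_length_m(slot_size_bytes, bit_rate_bps):
    """The sender must still be transmitting when a collision
    signal returns, so the round-trip delay must fit inside
    one slot time; half of that budget is the one-way length."""
    round_trip = slot_time_seconds(slot_size_bytes, bit_rate_bps)
    return PROPAGATION_SPEED * round_trip / 2

# Classic Ethernet: a 64-byte slot at 10 Mbps gives the familiar
# 51.2 microsecond slot time.
print(round(slot_time_seconds(64, 10e6) * 1e6, 1))   # 51.2

# Keeping that 64-byte slot at 1000 Mbps shrinks the distance
# budget dramatically (real-world delays shrink it further).
print(round(max_one_way_length_m(64, 1000e6)))       # 51
```

With the bigger 512-byte slot, the same model gives a one-way budget of several hundred meters, which is why Gigabit Ethernet can keep useful cable lengths without raising the minimum frame size.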

Gigabit Ethernet maintains the minimum and maximum frame sizes of Ethernet. Since Gigabit Ethernet is 10 times faster than Fast Ethernet, maintaining the same slot size would require reducing the maximum cable length to about 10 meters, which is not very useful. Instead, Gigabit Ethernet uses a bigger slot size of 512 bytes. To maintain compatibility with Ethernet, the minimum frame size is not increased; rather, the "carrier event" is extended. If the frame is shorter than 512 bytes, it is padded with extension symbols: special symbols that cannot occur in the payload. This process is called Carrier Extension.
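A minimal sketch of carrier extension, assuming a simplified model: the `EXTENSION` marker below is an invented stand-in, since real extension symbols are special physical-layer code groups, not data bytes, which is exactly why they can never be mistaken for payload.

```python
SLOT_SIZE = 512   # Gigabit Ethernet slot size in bytes
MIN_FRAME = 64    # Ethernet minimum frame size in bytes
EXTENSION = "R"   # illustrative stand-in for an extension symbol

def transmit_unit(frame_bytes):
    """Return the symbols placed on the wire for one frame.
    Short frames keep their original length; the carrier event
    is extended to fill one full 512-byte slot."""
    assert len(frame_bytes) >= MIN_FRAME, "below Ethernet minimum"
    symbols = list(frame_bytes)
    # Pad the carrier event (not the frame itself) out to slot size.
    symbols += [EXTENSION] * max(0, SLOT_SIZE - len(frame_bytes))
    return symbols

unit = transmit_unit(b"\x00" * 64)  # a minimum-size frame
print(len(unit))                    # 512: 64 frame bytes + 448 extension symbols
```

Note that the frame itself is unchanged, so a receiver that strips the extension symbols sees an ordinary 64-byte Ethernet frame; only the time the carrier occupies the wire grows.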
 

