Broadcast Engineer at BellMedia, computer history buff, compulsive hoarder of deprecated and disparate hardware, R/C, robots, Arduino, RF, and everything in between.

Ethernet At 40: From A Napkin Sketch To Multi-Gigabit Links


September 30th, 1980 was the day Ethernet was first commercially introduced, exactly forty years ago. It was first defined in a 1975 Xerox patent filing as a 10 Mb/s networking protocol, introduced to the market in 1980, and subsequently standardized by the IEEE in 1983 as IEEE 802.3. Over the following thirty-seven years, this standard would see numerous updates and revisions.

The present Ethernet standard includes not just the different speed grades, from the original 10 Mb/s to today’s maximum of 400 Gb/s, but also the countless changes to the core protocol that enabled these ever higher data rates, not to mention new applications of Ethernet such as power delivery and backplane routing. Ethernet’s reliability and cost-effectiveness led to the 1990 10BASE-T standard (802.3i-1990), which gradually found its way onto desktop PCs.

With Ethernet now as omnipresent as the luminiferous aether it was named after, this seems like a good time to look at what made Ethernet so different from other solutions, and what changes it had to undergo to keep up with the demands of an ever more interconnected world.

The novelty of connecting computers

IBM PCs, connected.

These days, most computers and computerized gadgets are little more than expensive paperweights whenever they find themselves disconnected from the global Internet. Back in the 1980s, people were just beginning to catch on to the things one could do with a so-called ‘local area network’, or LAN. Unlike the mainframe-and-terminal systems of the 1960s and 1970s, a LAN entailed connecting microcomputers (IBM PCs, workstations, etc.) at, for example, an office or laboratory.

During this transition from sneakernet to Ethernet, office networks soon grew to thousands of nodes, ushering in the wonderful world of the centrally managed office network. With any document available via the network, the world seemed ready for the paperless office. Although that never happened, the ability to communicate and share files via networks (LAN and WAN) has become a staple of everyday life.

Passing the token

The circuitous world of Token Ring configurations.

What did change, and rapidly so, was the landscape of commodity network technology. Ethernet’s early competition was a loose collection of smaller network protocols, including IBM’s Token Ring. Although many myths formed about the presumed weaknesses of Ethernet in the 1980s, summarized in this document (PDF) from the 1988 SIGCOMM Symposium, ultimately Ethernet turned out to be more than sufficient.

Token Ring’s primary claim of superiority was its determinism, as opposed to Ethernet’s carrier-sense multiple access with collision detection (CSMA/CD) approach. This led to the most persistent myth: that Ethernet couldn’t sustain utilization beyond 37% of its bandwidth.

For cost reasons, the early years of Ethernet were dominated by dumb hubs rather than smarter switches. This meant that the Ethernet adapters themselves had to sort out collisions, and as anyone who has used Ethernet hubs probably knows, the surest sign of a busy network was a glance at the ‘collision’ LED on the hub(s). As Ethernet switches became more affordable, hubs quickly vanished. Because a switch establishes a dedicated path between two nodes instead of relying on CSMA/CD to sort things out, collisions simply stopped happening; the issue that made hubs (and Ethernet along with them) the target of many jokes disappeared, and the myth was busted.

Once Ethernet began to allow for the use of cheaper Cat. 3 (UTP) for 10BASE-T and Cat. 5(e) UTP cables for 100BASE-TX (and related) standards, Ethernet emerged as the dominant networking technology for everything from homes and offices to industrial and automotive applications.

A tree of choices

The increased spectral bandwidth use of copper wiring by subsequent Ethernet standards.

While the list of standards under IEEE 802.3 may seem rather intimidating, a more abbreviated list for the average person can be found on Wikipedia as well. Of these, the ones most likely encountered at some point are:

  • 10BASE-T (10 Mb/s, Cat. 3)
  • 100BASE-TX (100 Mb/s, Cat. 5)
  • 1000BASE-T (1 Gb/s, Cat. 5)
  • 2.5GBASE-T (2.5 Gb/s, Cat. 5e)

While the 5GBASE-T and 10GBASE-T standards have also been in use for a few years now, the 25 Gb/s and 40 Gb/s versions are at this point definitely reserved for data centers, requiring Cat. 8 cables and allowing runs of only up to 36 meters. The remaining standards in the list are aimed primarily at automotive and industrial applications, some of which are fine with 100 Mb/s connections.

Still, the time is slowly arriving when a whole gigabit is no longer enough, as some parts of the world now have Internet connections that match or exceed this rate. Who knew that a gigabit LAN could one day become the bottleneck for one’s Internet connection?

ALOHA

The Xerox 9700, the world’s first Ethernet-connected laser printer.

Back in 1972, a handful of engineers at Xerox’s Palo Alto Research Center (PARC), including Robert “Bob” Metcalfe and David Boggs, were assigned the task of creating a LAN technology that would let the Xerox Alto workstation hook up to the laser printer, which had also been developed at Xerox.

This new network technology would have to allow for hundreds of individual computers to connect simultaneously and feed data to the printer quickly enough. During the design process, Metcalfe used his experience with ALOHAnet, a wireless packet data network developed at the University of Hawaii.

Metcalfe’s first Ethernet sketch.

The primary concept behind ALOHAnet was the use of a shared medium for client transmissions. To accomplish this, a protocol was implemented that could be summed up as ‘listen before send’, which would become known as ‘carrier-sense multiple access’ (CSMA). This would go on to inspire not only Ethernet, but also WiFi and many other technologies. In the case of Ethernet, the aforementioned CSMA/CD formed an integral part of the early standards.
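To make ‘listen before send’ concrete, here is a minimal Python sketch of the CSMA/CD idea, assuming a hypothetical medium object that exposes carrier sensing and collision detection; the names and the transmit() return convention are illustrative rather than any real driver API, though the 51.2 µs slot time and the 16-attempt limit are the classic 10 Mb/s Ethernet parameters.

    import random
    import time

    SLOT_TIME = 51.2e-6   # 10 Mb/s Ethernet slot time, in seconds
    MAX_ATTEMPTS = 16     # classic Ethernet gives up after 16 tries

    def csma_cd_send(medium, frame):
        """Listen before send, detect collisions, back off exponentially."""
        for attempt in range(1, MAX_ATTEMPTS + 1):
            while medium.carrier_sensed():       # carrier sense: wait for idle
                time.sleep(SLOT_TIME)
            if medium.transmit(frame):           # assumed: False on collision
                return True
            # Binary exponential backoff: wait a random number of slot times
            # drawn from a window that doubles each attempt (capped at 2^10).
            window = 2 ** min(attempt, 10)
            time.sleep(random.randrange(window) * SLOT_TIME)
        return False                             # excessive collisions: give up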

Coaxial cabling was used for the common medium, which required the use of the cherished terminators at the end of every cable. Adding additional nodes required the use of taps, allowing for the BNC connector on the Ethernet Network Interface Card to be attached to the bus. This first version of Ethernet is also called ‘thicknet’ (10BASE5) due to the rather unwieldy 9.5 mm thick coax cables used. A second version (10BASE2) used much thinner coax cables (RG-58A/U) and was therefore affectionately called ‘thinnet’.

The best plot twist

Don’t forget to terminate your bus.

In the end, it was the use of unshielded, twisted-pair cabling that made Ethernet more attractive than Token Ring. Along with cheaper interface cards, it turned into a no-brainer for people who wanted a LAN at home or the office.

As anyone who has ever installed or managed a 10BASE5 or 10BASE2 network probably knows, interference on the bus, or issues with a tap or an AWOL terminator, can really ruin a day. Not that figuring out where the token dropped off the Token Ring network is a happy occasion, mind you. Although the common-medium ‘aether’ part of Ethernet has long been replaced by networks of switches, I’m sure many IT professionals are much happier with the star architecture.

Thus it is that we come from the sunny islands of Hawaii to the technology that powers our home LANs and data centers. Maybe something else would have come along to do what Ethernet does today, but personally I’m quite happy with how things worked out. I remember the first LAN that got put in place at my house in the late 90s when I was a kid, first to allow my younger brother and me to share files (i.e. LAN gaming), then later to share the cable internet connection. It allowed me to get up to speed with this world of IPX/SPX, TCP/IP and much more network-related stuff, in addition to the joys of LAN parties and being the system administrator for the entire family.

Happy birthday, Ethernet. Here is to another forty innovative, revolutionary years.


Does Your Phone Need a RAM Drive?


Phones used to be phones. Then we got cordless phones, which were part phone and part radio. Then we got cell phones. But with smartphones, we have a phone that is both a radio and a computer. Tiny battery-operated computers are typically a bit anemic, but as technology marches forward, those tiny computers have grown to the point that they outpace desktop machines from a few years ago. That means more and more phones are incorporating technology we used to reserve for desktop computers and servers. Case in point: Xiaomi now has a smartphone that sports a RAM drive. Is this really necessary?

While people like to say you can never be too rich or too thin, memory can never be too big or too fast. Unfortunately, that’s always been a zero-sum game. Fast memory tends to be lower-density while large capacity memory tends to be slower. The fastest common memory is static RAM, but that requires a lot of area on a chip per bit and also consumes a lot of power. That’s why most computers and devices use dynamic RAM for main storage. Since each bit is little more than a capacitor, the density is good and power requirements are reasonable. The downside? Internally, the memory needs a rewrite when read or periodically before the tiny capacitors discharge.

Although dynamic RAM density is high, flash memory still serves as the “disk drive” for most phones. It is dense, cheap, and — unlike RAM — holds data with no power. The downside is the interface to it is cumbersome and relatively slow despite new standards to improve throughput. There’s virtually no way the type of flash memory used in a typical phone will ever match the access speeds you can get with RAM.

So, are our phones held back by the speed of the flash? Are they calling out for a new paradigm that taps the speed of RAM whenever possible? Let’s unpack this issue.

Yes, But…

Source: PXFuel

If your goal is speed, then one answer has always been to make a RAM disk. These were staples in the old days when you had very slow disk drives. Linux often mounts transient data using tmpfs, which is effectively a RAM drive. A disk that refers to RAM instead of flash memory (or anything slower) is going to be super fast compared to a normal drive.
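You can see the effect for yourself with a rough Python sketch like the one below. It assumes a Linux system where /dev/shm is mounted as tmpfs (the usual default) and that the second path actually lands on flash or disk; on some distributions /tmp is itself tmpfs, which is why /var/tmp is used here. The paths and payload size are illustrative, not a rigorous benchmark.

    import os
    import time

    def time_write(path, data):
        """Write data to path once and return the elapsed seconds."""
        start = time.perf_counter()
        with open(path, "wb") as f:
            f.write(data)
            f.flush()
            os.fsync(f.fileno())   # push past the page cache where possible
        elapsed = time.perf_counter() - start
        os.remove(path)
        return elapsed

    data = os.urandom(256 * 1024 * 1024)                    # 256 MiB payload
    print("tmpfs:", time_write("/dev/shm/bench.bin", data))  # RAM-backed
    print("disk: ", time_write("/var/tmp/bench.bin", data))  # usually storage-backed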

But does that really matter on these phones? I’m not saying you don’t want your phone to run fast, especially if you are trying to do something like gaming or augmented-reality rendering. What I’m saying is this: modern operating systems don’t make such a major distinction between disk and memory. They can keep frequently used data from disk in RAM caches or buffers and manage that quite well. So what advantage is there in storing stuff in RAM all the time? If you just copy a flash drive to RAM and then write it back before you shut down, that will certainly improve speed, but you will also waste a lot of time grabbing stuff you never need.

Implementation

According to reports, the DRAM in Xiaomi’s phone can reach up to 44 GB/s, compared to the flash memory’s 1.7 GB/s reads and 0.75 GB/s writes. Those are all theoretical maximums, of course, so take them with a grain of salt, but the ratio (roughly 26x on reads and nearly 60x on writes) should be similar even with real-world measurements.

The argument (according to Xiaomi) is that games could install and load 40% to 60% faster. But this raises the question: how does the game get into RAM in the first place? At first we thought the idea was to copy the entire flash to RAM, but that appears not to be the case. Instead, the concept is to load games directly into the RAM drive from the network and then mark them so the user can see that they will disappear on a reboot. The launcher will show a special icon on the home screen to warn you that the game is only temporary.

So it seems that unless your phone is never turned off, you are trading a few seconds of load time for repeatedly installing the game over the network. I don’t think that’s much of a use case. I’d rather have the device intelligently pin data in a cache: in other words, set a bit on game files that tells them to stay in cache until there is simply no choice but to evict them, as in the sketch below. That would give you one comparatively fast load from flash memory, followed by very fast startups on subsequent executions until the phone powers down. The difference is you won’t have to reinstall every time you reset the phone.
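Here is what that pinning policy could look like, sketched as a small LRU cache in Python. This is purely a hypothetical illustration of the idea; it is not how Android or Xiaomi’s firmware actually manages flash caching, and the class and method names are invented for the example.

    from collections import OrderedDict

    class PinnedLRUCache:
        """LRU cache that evicts pinned entries only as a last resort."""

        def __init__(self, capacity):
            self.capacity = capacity
            self.entries = OrderedDict()   # key -> (value, pinned flag)

        def get(self, key):
            value, pinned = self.entries.pop(key)   # KeyError if missing
            self.entries[key] = (value, pinned)     # mark most recently used
            return value

        def put(self, key, value, pinned=False):
            if key in self.entries:
                self.entries.pop(key)
            elif len(self.entries) >= self.capacity:
                self._evict()
            self.entries[key] = (value, pinned)

        def _evict(self):
            # Walk from least to most recently used, skipping pinned entries;
            # evict a pinned entry only if everything is pinned.
            for key, (_, pinned) in self.entries.items():
                if not pinned:
                    del self.entries[key]
                    return
            self.entries.popitem(last=False)

    cache = PinnedLRUCache(capacity=2)
    cache.put("game.apk", b"...", pinned=True)   # survives cache pressure
    cache.put("thumb1", b"...")
    cache.put("thumb2", b"...")                  # evicts thumb1, not game.apk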

This is Not a Hardware RAM Drive

There have been hardware RAM drives, but those are really a different animal. Software RAM drives, which take part of main memory and make it look like a disk, appear to have originated in the UK around 1980 in the form of the Silicon Disk System for CP/M and, later, MS-DOS. Other computers of that era were known to support the technique, including those from Apple, Commodore, and Atari, among others.

In 1984, IBM differentiated PC DOS from MS-DOS by adding a RAM disk driver, something Microsoft would duplicate in 1986. However, all of these machines had relatively small amounts of memory and couldn’t spare much for general-purpose buffering. Letting a human decide that it made sense to keep a specific set of files in RAM was a better solution back then.

On the other hand, the Xiaomi design does have one important feature: it is good press. We wouldn’t be talking about this phone if they hadn’t incorporated a RAM drive. I’m just not sure it matters much in real-life use.

We’ve seen RAM disks used to cache browser files that aren’t important to keep across reboots, and that usually works well. It is also a pretty common trick in Linux. Even then, the real advantage isn’t the faster memory so much as removing the need to write cached data to slow disks when it doesn’t need to persist anyway.

tekvax
2 days ago
I remember these well.
Linux still uses them to mount kernel modules while booting!

Meryl Streep reads from "Trumpty Dumpty," John Lithgow's new book


Yes, John Lithgow has authored and illustrated a book. Actually, Trumpty Dumpty Wanted a Crown: Verses for a Despotic Age ($23) is his second volume of satirical poems aimed at Trump; the first, Dumpty, was released last October.


Coca-Cola is dropping Tab


Tab. What a beautiful drink. Tab. For beautiful people.

Beautiful people are at a loss for what to drink after learning that Coca-Cola will no longer make their beloved diet soda. The famously awful-tasting beverage is being canceled along with others in a purge of underperforming products.

tekvax
2 days ago
I haven't seen Tab for sale anywhere, for many many years...

DIY transparent screen (cat not included)


Makers Evan and Katelyn have a fun new video where they create a transparent screen from a standard LCD monitor, then have some fun with it.

The Supervisor (aka their cat) becomes the first demonstration.

Image: YouTube / EvanAndKatelyn


Man builds a retractable plasma lightsaber powered by propane


Okay, so it's not an elegantly crafted use of Kyber crystals, and more of a hyper-focused propane torch. But still. At least you can customize blade colors with it! As Nerdist points out, it's more akin to the canonical Mustafarian Proto-Sabers.
