September 2014 Column – Part 2

CHAOS MANOR REVIEWS
Computing at Chaos Manor
Column 369 – Part 2 of 4
September, 2014

The September 2014 column continues with this installment, which discusses Updating Systems, Hard Drive Lifecycles, and Faster and Faster. The previous installment is here; use the Newsletter signup to receive email notification of publication.

Updating the Systems

Back in early 2012 Eric and I built a Sandy Bridge (the Intel code name for 2nd Generation Core processors; this one is a Core i7) 64-bit system with a 200 GB SSD as drive C: and a terabyte of Seagate spinning metal as drive D:. It – we’re not sure of the sex of this system – was built in a Thermaltake case that keeps it cool and is astonishingly easy to work in, with accessible ports; the system runs fast and quiet. It’s rugged. And reliable. We called it Alien Artifact; I expect you can see why from the photo. It was intended to replace Bette, an Intel Core 2 Quad CPU Q6600 2.4 GHz system. The components were chosen to maximize performance for minimum cost, which is to say they sat at the “sweet spot” on the performance/cost curve. Bette has served us well since she was built in 2008, back when I was recovering from 50,000 rads of hard x-ray to the head.

Bette, an Intel Core 2 Quad built in 2008, which has served as the “main machine” here for years. Other systems are used for games and large downloads. Bette does email and the daybook, and has been highly reliable.

Eric did most of the construction of the new system, and we did some experiments with SSD drives. Meanwhile Bette went right on working as my “main machine” on which I get my Outlook mail, write my daybook log, and do just about everything but games. She’s woefully slow compared to Alien Artifact, but she’s reliable. (So is Alien Artifact, mind you. And that Thermaltake case is elegant inside and out.)

Alien Artifact, a Sandy Bridge computer built in a Thermaltake case, is cool, fast, and very quiet. It will shortly be moved into my office as the main system, taking over from Bette, who will be partly retired.

The upshot is that I never did bring Alien Artifact in here, and it sits out in the Great Hall at a work station there, connected to the net, and used for a lot of tests. It got finished just as I stopped meeting my deadlines, and I never did a full review of the system. One of the ways we will bring Chaos Manor up to date is to put Bette out to pasture and bring in Alien Artifact. I confess a certain sadness at doing this, but there’s a powerful reason.

Hard Drive Life Cycles

Hard drives have a life cycle (see article here). After an initial period of infant-mortality failures, there’s a stretch of about three years of fairly high reliability, then a sudden plunge to a failure rate of about 11% per year.
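
To put rough numbers on that, here is a quick back-of-the-envelope sketch in Python. The annual rates are illustrative assumptions in the shape of that curve, not figures taken from the study:

    # Rough survival odds for a drive, using illustrative annual failure
    # rates shaped like the bathtub curve described above. These are
    # assumptions, not measured values.
    annual_failure_rate = {1: 0.05, 2: 0.02, 3: 0.02, 4: 0.11, 5: 0.11, 6: 0.11}

    survival = 1.0
    for year in sorted(annual_failure_rate):
        survival *= 1.0 - annual_failure_rate[year]
        print(f"Still running after year {year}: {survival:.0%}")

By year six, roughly a third of such drives are gone; a hard-working 2008 drive is living on borrowed time.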

Bette has been with us since 2008, so she is well over 3 years old, and she’s been used hard every day. It is now time and past time to either replace her or replace her drives.

Hard drives rarely fail catastrophically and without warning. More likely, they slow down as read and write errors, retries, and remapping of bad sectors accumulate, until eventually you notice something is wrong. By the time you notice, though, the chances of actual catastrophic failure have increased, and it really is time to do something. We haven’t reached that stage yet, but at her age it’s only a matter of time.
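
Those retries and remapped sectors show up in a drive’s SMART counters long before anything dramatic happens. Here is a minimal sketch using smartmontools, assuming smartctl is installed and the drive of interest is /dev/sda; adjust for your system:

    import subprocess

    # Minimal sketch: dump SMART attributes with smartmontools' smartctl
    # and show the counters that climb as a drive ages. Assumes smartctl
    # is installed and the drive is /dev/sda; adjust for your system.
    WATCH = ("Reallocated_Sector_Ct", "Current_Pending_Sector",
             "Offline_Uncorrectable")

    report = subprocess.run(["smartctl", "-A", "/dev/sda"],
                            capture_output=True, text=True).stdout
    for line in report.splitlines():
        if any(name in line for name in WATCH):
            print(line)  # a steadily growing raw value is the warning sign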

Disk software gets better every year. Meanwhile Solid State Drives (SSDs) get cheaper per gigabyte every year, but they still cost far more per gigabyte than spinning metal. On the other hand, SSD is so much faster than spinning metal – even new spinning metal – that you really want your operating system (about 30–35 GB for Windows) and your most important disk operations done on silicon if you can.

Bob Thompson notes that one operation you may not want on SSD is the swap/paging file. “The cells in even an SLC SSD wear out after a large number of writes, and those in consumer-grade MLC SSDs wear out an order of magnitude faster.” A simple solution is a small – 64 GB – SSD devoted to the swap file. They’re cheap enough, and the speed improvement is worth it.
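
The arithmetic behind that concern is worth sketching. The endurance and write-rate figures here are illustrative assumptions, not specifications for any particular drive:

    # Back-of-the-envelope wear estimate for a dedicated swap SSD.
    # All figures are illustrative assumptions, not vendor specs.
    capacity_gb = 64            # small dedicated swap drive
    pe_cycles = 3_000           # typical consumer MLC rating; SLC is ~10x higher
    swap_gb_per_day = 20        # assumed heavy desktop paging workload

    total_writable_gb = capacity_gb * pe_cycles  # ideal wear leveling
    years = total_writable_gb / swap_gb_per_day / 365
    print(f"~{years:.0f} years to exhaust the rated endurance")  # ~26

Run the numbers for your own workload; a busy server pages far harder than any desktop.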

Our solution has been to use a good “sweet spot” SSD as the C: drive, and a new terabyte or larger spinning hard drive as D:. In particular, if you use Outlook you want all of Outlook’s .pst files on an SSD; that speeds up mail operations quite noticeably. Of course if you have to accept press releases and your address is generally known, meaning that you have an elaborate system of spam filters in place, you’ll also want a powerful (and power-hungry, which means hot) CPU to apply all those rules to each mail as it comes in.

A long time ago I postulated one of Pournelle’s Laws as “Silicon is cheaper than iron,” predicting that solid state drives, as opposed to spinning metal, would be the wave of the future; and so they are. Of course I did not believe it would take three decades for that to happen. But for the moment a 200 GB SSD C: drive and a spinning metal terabyte D: will be Good Enough for almost anyone but a fanatic gamer.

Incidentally, from the available data it seems that modern SSDs have a low failure rate for more than five years.

Faster and Faster

802.11a/b/g/n/ac run on two radio bands: 2.4 GHz and 5.1 to 5.8 GHz. With the wind behind it, 802.11ac Phase 2 may get 2 Gbps communications, but it’s not the fastest commercial short-distance wireless network: That would be WiGig.

WiGig started out as a separate standard, using the 60 GHz communications band (V-band, for you microwave engineers). It promises 4 to 7 Gbps speeds now, and more in the future. 60 GHz is an interesting animal: It works very well over short distances, but it’s highly attenuated by oxygen absorption, so it hasn’t been used much for ground-based communications. It’s also stopped by nearly any wall or structure: Minimal risks of interference or eavesdropping. Those characteristics make it nearly perfect for in-room communications: Stream a program from your laptop to the TV (Miracast, cousin to Apple’s AirPlay), transfer files wirelessly faster than Gbps Ethernet, talk to your big box of disks, etc.
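
A quick path-loss calculation shows why 60 GHz stays in the room. This sketch uses the standard free-space formula; the 15 dB/km oxygen figure is the commonly quoted peak absorption near 60 GHz:

    import math

    def fspl_db(distance_m, freq_hz):
        # Standard free-space path loss in dB (Friis form).
        return 20 * math.log10(distance_m) + 20 * math.log10(freq_hz) - 147.55

    O2_DB_PER_KM = 15.0  # commonly quoted peak oxygen absorption near 60 GHz

    for d in (5, 100, 1000):  # meters
        oxygen = O2_DB_PER_KM * d / 1000
        print(f"{d:>4} m: free-space loss {fspl_db(d, 60e9):5.1f} dB, "
              f"oxygen {oxygen:.2f} dB")

Across a room the oxygen loss is a rounding error; at a kilometer it piles 15 dB onto an already steep path loss, and the first wall finishes the job.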

The WiGig team finally wised up and combined their efforts with the Wi-Fi Alliance, in a little-noticed announcement at CES 2013. That was important, because it brought the Wi-Fi Alliance’s interoperability standards, marketing, and testing imprimatur onboard. Astute readers will remember that the last non-Wi-Fi wireless proposal, Ultra Wide Band, didn’t go anywhere, for technical and marketing reasons.

Peter Glaskowsky, former editor of Microprocessor Report, says:

 “As for WiGig, I think this ends up being Intel’s second effort to leverage the popularity of Wi-Fi to promote something that is nothing like Wi-Fi, the first one being WiMAX. WiGig isn’t going to be used for “networking” at all, at least not at first, but rather as a replacement for point-to-point wires such as the ones that connect a computer to a display. Many WiGig devices won’t interoperate with Wi-Fi at all.

“I was at IDF last week, and while Intel did a lot to whet the public appetite for truly wireless computers, they’re a long way from delivering the necessary technologies in forms suitable for mass adoption. Rezence doesn’t fully solve the problem of cordless charging of laptops, and WiGig isn’t nearly as fast as Intel’s own Thunderbolt wired interface, as used by Apple and various Windows OEMs for a few years now.”

At IDF this year, Intel proposed a standard for 60 GHz docking stations: Two monitors, wireless connection, It Just Works, also supporting your disk farm and all your other peripherals. This is all slated as part of their Maple Peak wireless system, promised for mid-2015 in ultrabooks and the like. (For the record, Intel has been pushing this idea since at least 2011, but the silicon had to catch up; Moore’s Law again.)

It doesn’t end there: Drop your computer onto a charging pad, and it will use the Rezence standard for inductive wireless charging, up to 50 Watts. Communications and charging, without wires. I can’t wait.

But it’s not really wireless: The dirty little secret of wireless is that it takes lots of wires to connect outside the room. For communicating outside the room – say, to your video server or backup drive, or to stream TV from the living room to the bedroom – you’ll want speeds greater than Gigabit, which will probably mean 10 Gbit wired Ethernet coming into its own. We don’t need 10 Gbit wired Ethernet just now, but think about that next time you have hardware update decisions to make. You’ll want 10 Gbit sooner than you think. In other words, this is an area where Good Enough won’t stand still for long.

Good Enough and Ethernet

Gigabit Ethernet as a standard goes back to 1998—a millennium in computer time. In the last sixteen years, it’s gotten so cheap that any wired device supports it, and you can buy $40 8-port Gigabit Ethernet switches that work quite well; they’re a commodity now for all but the most critical installations. And Gigabit Ethernet has been Good Enough everywhere for a decade, outside the data center.

10 Gigabit Ethernet over copper cabling (IEEE 802.3an), and a gaggle of related standards, are routine in the data center, enabling Software Defined Networking (SDN) and leaving FDDI to languish. 40 and 100 Gigabit Ethernet are in the pipeline, with Terabit (!) Ethernet on the horizon.

Even higher-end prosumer switches might have an SFP+ port (the successor to the old GBIC) into which you can plug a 10 Gbit transceiver, either copper or fiber. You won’t like the price, but that’s today, and Moore’s Law is inexorable on pricing too.

As “I want my video and peripherals everywhere” becomes more common, particularly if WiGig takes off, I predict that advanced users will start flooding existing networks, using up the formerly Good Enough Gigabit building backbone as they stream HD video from one conference room to another, move very large files from room to room, and generally slow everyone else down. Then 10 Gbit wired Ethernet will come into its own—even at home—as a way to link your laptop to the big disk in the back room or the screen in the conference room, or even to capture full-resolution HD video from your camera to the network.
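
The arithmetic on why Gigabit stops being Good Enough is simple. The per-stream bitrates below are typical ballpark figures, not measurements:

    # How many simultaneous video streams fit on a link, roughly.
    # Bitrates are ballpark assumptions, not measurements.
    BLURAY_HD_BPS = 40e6                     # high-quality compressed 1080p
    RAW_1080P60_BPS = 1920 * 1080 * 24 * 60  # uncompressed 1080p60, ~3 Gbps

    for name, link_bps in (("1 Gbit", 1e9), ("10 Gbit", 10e9)):
        usable = link_bps * 0.9              # assume ~90% after protocol overhead
        print(f"{name}: {usable / BLURAY_HD_BPS:5.1f} compressed HD streams, "
              f"{usable / RAW_1080P60_BPS:4.1f} uncompressed 1080p60 streams")

A handful of compressed streams won’t sink a Gigabit backbone, but a single uncompressed camera feed will; that is the full-resolution capture case above.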

Currently, 10 Gbit Ethernet over copper requires Cat6 (or better) cable, which has better crosstalk rejection and stricter standards for pair twisting. (Twisted pairs cut down on crosstalk: Look at a phone cable—no twists—versus an Ethernet cable sometime.) 1 Gbit Ethernet will run over Cat5 with few or no problems; no new wires were required when I moved from 10/100 Mbps Ethernet to 1 Gbit, but time will catch up with me—and probably you, if you haven’t re-cabled in the last five years.

I’ve probably got all the specifics wrong, but I will bet that well inside five years I’ll be writing about pulling out all the Good Enough Cat5 cable and replacing it with Cat6a. And all our new cables will be Cat6a or better, just in case.

The moral of this story is that when prices fall far enough, you’re probably better off replacing some perfectly good hardware with better: the new hardware will take advantage of all kinds of small improvements. Think of the improved speeds we get with the new cable modem, even though Time Warner hasn’t made any deliberate improvements here: as they replace older equipment with new, everything gets a little better, and the new modem takes advantage of that. Once TW gets around to DOCSIS 3.0 in Studio City, I’ll one day get a dramatic Internet speed improvement without having to do anything about it at all.
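
For the curious, the DOCSIS 3.0 improvement is plain channel-bonding arithmetic. A single 6 MHz, 256-QAM downstream channel carries about 38 Mbps after overhead; DOCSIS 3.0 bonds several of them, and channel counts vary by operator:

    # DOCSIS downstream arithmetic: one 6 MHz, 256-QAM channel carries
    # about 38 Mbps after overhead; DOCSIS 3.0 bonds multiple channels.
    # The channel counts here are illustrative.
    MBPS_PER_CHANNEL = 38
    for channels in (1, 4, 8, 16):
        print(f"{channels:>2} bonded channels: ~{channels * MBPS_PER_CHANNEL} Mbps downstream")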

 The third installment of the September 2014 column will discuss Docking Stations, Living with Firefox, and the Bulging MacBook Air. Sign up for the newsletter to be notified when the next installment is published.

You may add your comments below; comments are moderated. Note that Dr. Pournelle may not respond to comments due to constraints of his time. You may use the Contact page to send email to Dr. Pournelle.

2 comments on “September 2014 Column – Part 2”

  1. On the wear rate of SSDs.

    All modern SSDs have wear leveling algorithms. They keep track of how often each sector has been written to. Given drive sizes vs transfer rates, it is essentially impossible to wear out a sector of an SSD due to writes.

    Even if you wrote a teeny program that wrote to the same logical location on the disk absolutely as fast as the hardware will allow, the drive will notice that sector has garnered far higher than average write cycles. It will then copy a different logical sector to that physical spot. And then your little program will be writing to a fresh new sector.

    That will keep happening until you wrap around the entire disk and the cycle repeats. Writing at full speed, it will still take years to transfer enough information to the drive to wear out any given spot.

    In real life use, you don’t come anywhere near that level of throughput. Therefore, it will likely be some other component in the drive that dies first.
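
    Illustrative arithmetic for a typical desktop workload (the figures are assumptions, not specifications for any particular drive):

        # Rough wear-out horizon under ordinary desktop use.
        capacity_gb = 256         # assumed drive size
        pe_cycles = 3_000         # assumed consumer MLC endurance rating
        writes_gb_per_day = 20    # assumed busy desktop workload

        days = capacity_gb * pe_cycles / writes_gb_per_day
        print(f"~{days / 365:.0f} years of writes before rated wear-out")  # ~105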

    OTOH, thumb drives and SD cards do not have that kind of intelligence built in. So, if you use the Windows feature (I forget the term) to use a thumb drive for temporary storage, that device could very well have a spot go bad from write fatigue.

  2. On SSD endurance, a lot has been written but less actually tested. It seems each vendor’s firmware and chips affect the cells differently.

    This endurance test was very interesting and pretty much assured me that in almost all single user desktop/laptop situations you’re not going to break an SSD any quicker than a mechanical hard drive, and in most cases an SSD will easily outlive an HDD.

    http://techreport.com/review/26058/the-ssd-endurance-experiment-data-retention-after-600tb
