Detecting Vulnerable “Internet of Things”

So the big news last week was the giant attack by the Mirai malware/bots on Dyn that effectively killed (well, seriously wounded) the Internet for a lot of people. The "Internet of Things" (IoT) was the source of the attack, because of bad security practices on those devices (things like backdoor accounts and default passwords).

I’m not going to explain what happened. If you are interested in this subject, you probably already know that the attack was done by the Mirai malware. The rest of you can ask the googles if you need an explanation of what happened.

And I am not going to explain how Mirai works, or where to get a copy of the Mirai malware source code.

The thing that is not clear to many people: how can you check whether the devices on your network, whether home or work, are susceptible to a Mirai-style attack?

The basic attack comes in through specific ports on your network that are visible from the outside (external to your network) and reach devices 'inside' your network. So to test whether your network is vulnerable, you need to check from the 'outside' of your network.

To do this check from the ‘outside’, I recommend the venerable (fancy term for old) “ShieldsUp” check from Gibson Research. This is a free tool that will scan for open ports on your network (this should work on any OS or network).

But, before you do that, make sure you have the permission of the owner of the network. Attacking (or even scanning) a network you do not own can be a felony in the US, and probably in other countries. So, before you proceed, make sure you have the network owner's permission.

You can check your own home network, though, since you are the owner. But, again, only do this scan on networks you own, even though the scan is very benign.
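For the command-line inclined, a complementary check from inside your own LAN is to see whether the telnet-style ports Mirai brute-forces are even listening on a given device. This is only a rough sketch in Python, not a substitute for the external ShieldsUp scan; the target address and port list are illustrative assumptions, and (again) only point it at devices you own.

```python
# Minimal sketch: check whether a few ports commonly abused by Mirai-style
# bots are reachable on a device on YOUR OWN network. Address and port list
# are illustrative, not an authoritative audit.
import socket

TARGET = "192.168.1.50"            # hypothetical address of a camera/DVR you own
PORTS = [23, 2323, 80, 8080]       # telnet ports Mirai brute-forces, plus common web admin ports

def is_open(host, port, timeout=2.0):
    """Return True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except (OSError, socket.timeout):
        return False

for port in PORTS:
    state = "OPEN" if is_open(TARGET, port) else "closed/filtered"
    print(f"{TARGET}:{port} -> {state}")
```

An open telnet port isn't proof of compromise, but combined with a default password it is exactly the kind of exposure Mirai looks for.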

You can find the Gibson Research "ShieldsUp" tool at http://bit.ly/2dA9Ubd. Carefully read the information on that page. (For instance, that page will show you the unique identification that every web site you visit can find out. Even the 'private' function of your browser will disclose that information. Again, read the page carefully to understand the implications.)

Once you have read the info on that page, click on the “Proceed” button (either one). On the next page, read the information, then click the orange button to check your exposure to UPnP (Universal Plug and Play).


The test will take under a minute, then the result will be displayed. If your network is OK for that test, you'll get a nice green message. That's good. If your network has problems, there will be some explanation of what you should do. We're not going to go into any of that "What You Should Do" stuff; it's pretty deep and complicated.

The next step is to check for any open 'ports' on your network. Go back to the testing page (the page you saw when you clicked on the "Proceed" button). On that screen, this series of buttons is the next step.


Run the "Common Ports" test first. Then run the "All Service Ports" test. As with the first test, you are looking for all 'green' results. Any bad results will be listed, along with explanations. Again, we aren't going to explain things here; if you need more info, look at the site's explanations, and ask the googles if needed.

On my computer on my home network (which I own, so I have permission to scan my network), I got ‘all green’, as shown in this screen shot:


Hopefully, you will too. If you don't, use the site's explanations to figure out your next steps.

Building Eugene


Eric Pobirs, a Chaos Manor Advisor, details his build of a new computer system for Chaos Manor.

It's been a while since we built a new PC at Chaos Manor. Along with the expected incremental improvements to the platform, two major items have now come to the consumer market. The first is PCIe-connected SSD storage that raises performance to levels that will take years to fully exploit. The second is USB 3.1, which brings double the bandwidth, a new simpler plug type, and a variety of new modes for delivering power.

I'd like to say I carefully selected each component in minute detail to assemble the best possible machine, but I really only had one item in mind when I started. The make and model of that was settled by default, as it is the only one you can reliably buy at the moment. [The component list is at the end of this article – Editor]

The fact is, with a decent knowledge of PC tech trends it doesn't take much deliberation to produce a very nice machine if you don't have anything terribly specific in mind. Also, it was the right time of year to assemble parts, as this was leading into Black Friday and sales galore. I also picked up several pieces of another build (an FM2+ HTPC), with just one vital component still needed to proceed; but as a bargain hunter I must exercise patience.

Eugene is Named

The name Eugene comes from Eugene 'Flash' Thompson, Peter 'Spider-Man' Parker's high school nemesis and later close friend. Back in the 60s and 70s he went off to Vietnam, and then returned without a scratch and a hot wife in tow. More recently he was retconned to be sent to Iraq, have his lower legs blown off by an IED, and then become the new host of the Venom symbiote and a superhero in his own right.

ANYWAY…. Flash memory is a really big part of this machine, so I used that to inspire the name. The fact that it is also the middle name of Chaos Manor’s proprietor is purely coincidence. Probably.

The CPU and Motherboard

This new build had been a near-future plan for a while, but the combined special on the ASUS Z170-AR motherboard and Intel Core i5-6500, along with the knowledge that the Samsung 950 was available, compelled me to pull the trigger. The rest of the components, nearly all offering an attractive price, quickly fell into place. Even the SSD, while commanding a price premium compared to SATA models of comparable capacity, had a decent discount at a time when most retailers aren't bothering, due to demand outstripping supply for the moment.

The CPU belongs to the most recent Skylake generation, the sixth since the Core i3/5/7 line began, as indicated by the model number. The lack of a 'K' at the end of the model number means it has a locked multiplier, the determinant of the processor's internal clock rate compared to the rate at which it communicates with the rest of the system, and is thus unsuited for overclocking.

That became a source of difficulties.

Overclocking Woes

Recently, motherboard makers have found a method of overclocking CPUs with locked multipliers, due to a difference in how Skylake and its chipset work compared to previous generations, but this is too recent to be a factor in the board we used. ASUS has released new firmware supporting the technique for several of their other Z170 boards, but not for this particular one as yet. Which is all right, as testing PCs to destruction is a sport I'll leave to others.

Motherboard Improvements

The 6500 has a number of improvements over its predecessors. Most of these are aimed at the mobile sector, but some are universal, such as significant improvements to the video subsystem, which in turn raises the ante for the entire PC market, since Intel's IGA (Integrated Graphics Adapter) remains the lowest common denominator that developers look to in their design decisions. PCIe 3.0 is standard, raising the available bandwidth to slightly less than 8 Gbps per lane, where PCIe 2.0 delivered 5 Gbps with substantially greater encoding overhead.
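The per-lane numbers come down mostly to encoding efficiency: PCIe 2.0 uses 8b/10b encoding (20% of the signal rate is lost to overhead), while PCIe 3.0 moves to 128b/130b. A quick back-of-the-envelope sketch of the arithmetic:

```python
# Back-of-the-envelope per-lane PCIe throughput.
# PCIe 2.0 signals at 5 GT/s with 8b/10b encoding; PCIe 3.0 at 8 GT/s with 128b/130b.
def effective_gbps(transfer_rate_gt, payload_bits, total_bits):
    """Usable bits per second after line-code overhead, in Gbps."""
    return transfer_rate_gt * payload_bits / total_bits

pcie2 = effective_gbps(5.0, 8, 10)      # ~4.0 Gbps per lane
pcie3 = effective_gbps(8.0, 128, 130)   # ~7.9 Gbps per lane

print(f"PCIe 2.0: {pcie2:.2f} Gbps/lane")
print(f"PCIe 3.0: {pcie3:.2f} Gbps/lane (x4 link: {4 * pcie3 / 8:.2f} GB/s raw)")
```

Four lanes of PCIe 3.0 thus work out to just under 4 GB/s of raw bandwidth before protocol overhead, which is what makes the M.2 slot discussed below so much faster than SATA.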

Another platform advance is DDR4 memory, with lower power, greater speed, and higher density potential. DDR3's spec topped out at 128 GB per module, and DDR4 beats that by a factor of four, going all the way up to 512 GB in a single slot. While this isn't going to figure into consumer desktops any year soon, it does translate into systems that are both more affordable and more powerful once the market shifts to the new standard. DDR4 prices have already dropped around 50% from early 2015. Intel does support the use of DDR3 with Skylake for low-cost systems, but the cost advantage should disappear within 2016.

One big feature that didn't make it into Skylake is USB 3.1. As with USB 3.0 and USB 2.0 before it, Intel is rarely in a hurry to integrate barely dry standards, and this leaves the market open for third-party chips on motherboards and add-in cards. Since Skylake uses a new socket, LGA1151, requiring newly designed boards all around, all but the cheapest boards feature USB 3.1 via a third-party chip.

The most common is the ASMedia ASM1142, the latest from a longtime supplier of USB ICs. This chip connects to the system by a single PCIe 3.0 lane. At first glance this would seem inadequate to support the 10 Gbps bandwidth of USB 3.1, but the protocol overhead between the USB host and devices is great enough that the single lane is sufficient to handle the traffic to the rest of the system. The upcoming Kaby Lake generation from Intel is due to have USB 3.1 as a native feature and may perform better, but probably not so much that it will be easily perceivable.

SSD Slots

A big part of this generation isn't really new but is entering the mainstream with Skylake and its motherboards: the M.2 slot for SSDs. The spec has appeared in high-end laptops for a while, and in the X99 generation of motherboards, but those implementations were mostly either SATA or older, slower PCIe with fewer lanes.

The fully realized version seen in Z170 systems utilizes four lanes of PCIe to connect to the system with a raw bandwidth of 3.2 GBps. That’s gigabyte with a capital G and capital B. This is such a massive gain in performance that it is wasted much of the time as the load from typical desktop apps doesn’t provide much challenge.

Memory Expands

When systems with 32 GB or more of RAM are used with applications handling data measured in the tens of gigabytes, the difference will be obvious. That's easy enough to hit with serious high-resolution video editing or 3D animation rendering, but most of us aren't creating the next Pixar feature.

It will take a while for clever engineers to find ways to make this more useful to the average user. The difference between this and a SATA-III SSD is noticeable in various operations that complete with unexpected speed but it is subtle compared to the dramatic difference when first going from a spinning platter drive to an SSD.

How future PCs will exploit this performance remains a matter for speculation. If Intel and Microsoft have a concrete idea it hasn’t been publicized much, likely for fear of the Osborne Effect.

Intel says their Optane, non-volatile memory that is supposed to hit the market in 2016, will be a thousand times faster and ten times denser than flash memory. Applied to the M.2 form factor used by the Samsung 950 in Eugene, this would allow for a 5 TB drive about the size of a few stacked business cards.

Intel has suggested that one way this will be sold is as DIMM memory modules rather than treated as drives for secondary storage. This means the Optane would become part of the memory map alongside the system’s RAM.

This is where 64-bit addressing really takes off and the ability of DDR4 to support a 512 GB DIMM sees practical use in a consumer PC. (This won't be using the slots in existing DDR4 motherboards but rather a new extended spec Intel is introducing. These will appear first in servers, where the user base is better equipped to deal with the new concepts required. Just to stir things up further, Intel has suggested a future PC might not have any volatile DRAM at all, though this would be severely limiting for video performance.)

The earliest PC implementation would likely treat the Optane block as a virtual drive – RAM disks, as they used to be called back when floppies stalked the earth. Apps and data would still load into RAM to be used. Later, as the OS and apps catch up, the mapped memory would be treated as normal memory that is a little slower and has apps in suspension waiting to be invoked.

The difference between having the app running from RAM and running from Optane will be small enough that it will require some design work to make the system smart enough to know which apps are deserving of the greater speed of RAM vs. the risk of data loss from power failure. That is all in the future but those trying to figure out how to make this work have the PCIe SSDs now to start the process.
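The "storage in the memory map" idea has a small-scale ancestor that works today: memory-mapped files, where the OS pages data between disk and RAM on demand and the application just sees a range of bytes. A minimal Python sketch (the file name is made up for illustration):

```python
# Minimal sketch of memory-mapped file access: reads and writes go through the
# mapping as if the file were ordinary memory, with the OS paging data on demand.
import mmap

with open("scratch.bin", "wb") as f:
    f.write(b"\x00" * 4096)                  # create a small backing file

with open("scratch.bin", "r+b") as f:
    with mmap.mmap(f.fileno(), 0) as mem:    # map the whole file into memory
        mem[0:5] = b"hello"                  # a plain slice assignment, no explicit write()
        print(mem[0:5])                      # reads come straight from the mapped pages
```

Optane on a DIMM would, in effect, make that kind of access the normal case rather than a special one.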

Old Amiga hands may find some familiarity in this. On the Amiga there was Chip RAM and Fast RAM. Chip RAM was shared with the co-processors performing audio and video tasks, while Fast RAM was seen solely by the CPU. This was an important consideration when performing a CPU-intensive task or allocating memory for a RAM disk. Twenty years later a similar situation is arising, although at orders of magnitude greater amounts of memory and speed.

Although the ATTO benchmarks are stunning, the PCIe SSD is a bit of a letdown for casual use in a desktop. The advantage for mobile PCs and very small systems like an Intel NUC is obvious. Unless one is building RAIDs of 3.5" drives and/or installing multiple video cards, there is little reason for a full-featured desktop PC to be much larger than those currently targeting HTPC use.

Perhaps I’ve gotten spoiled.

SSD Drives

Last night I updated an older laptop, from 2007, that hadn't been turned on in months. It uses SATA-II, so installing an SSD is of limited value. It felt excruciatingly slow compared to the HP AMD A6 laptop with an SSD. If I strain my memory, I can recall that the initial transition from parallel ATA hard drives to SATA didn't feel like much of an upgrade in casual use, even if the benchmarks made it clear there was quite a difference, especially if RAID were used.

So is M.2 too much of a good thing? Might there not be a place for something faster than SATA but within the realm of what current systems can exploit, less demanding on system resources, and perhaps less of a pain in the wallet? Much of the industry thought this might be the case, and the ASUS, like nearly all Z170 boards but the smallest, has what they came up with as a solution. The only problem is you'll never have the option to use it.

On these boards you can spot what appears to be a pair of SATA ports and a third smaller port, positioned slightly away from the other SATA ports. These three are a standard called SATA Express. If it looks like two SATA ports ganged up to form one port with twice the bandwidth, well, it pretty much is, along with some ideas borrowed from the Serial Attached SCSI (SAS) format used in the high-end storage systems found in enterprise data centers.

The two ports can be used separately as normal SATA-III 6 Gbps connections. But given a SATA Express device, the pair and their little friend can be used to provide a 12 Gbps connection to an SSD, which should be significantly lower priced than an equivalent-capacity M.2 drive, thanks to being able to use lower-performance flash memory in the roomier and less thermally sensitive 2.5" hard drive form factor.

The design also allows for transitional devices that use existing controllers to offer more savings while raising the performance ceiling over SATA. The idea was that a desktop PC could have a single SATA Express drive for its primary boot and app volume and still have plenty of SATA ports left over for more hard drives and optical drives.

Sounds great, but for the problem that nobody is producing SATA Express devices. Although a few devices were demonstrated at trade events, none of the companies with the level of market power needed to place sufficiently large orders for the new parts involved was willing to commit.

The looming approach of M.2 and its awesome numbers overwhelmed the other practical considerations in favor of SATA Express. So you’ll likely never use SATA Express and since most people don’t utilize the entire bank of six SATA connectors, it comes off as rather a waste.

But it isn’t a complete wash. Some clever folks felt the same concern and found a way to utilize the SATA Express port: http://bit.ly/1ZTM72s

Because you cannot have too many USB 3.1 ports, can you?

Although this is a lost generation for SATA Express, the story doesn’t end there.

Beyond SATA Express: U.2

A new spec, building on SATA Express but leaving behind the transitional elements and implementing a 3.2 GBps interface compatible with M.2, is coming, possibly as soon as Intel’s Kaby Lake generation but certainly in the following Cannonlake.

The new spec is called U.2, and it has the same four lanes of PCIe 3.0 providing the bandwidth. The main difference between U.2 and M.2 is the use of the 2.5" hard drive form factor, allowing for higher capacities and less concern about heat. This should give future U.2 drives a small cost advantage over M.2 drives, making them the preference for desktop builds while M.2 owns the mobile and HTPC spaces.

Some Build Challenges

The build had a few challenges along the way, even though most of the parts were a real pleasure to use.

The Corsair power supply is simple and competent, offering all the plugs needed for a system drawing the level of wattage it can deliver. Eugene as currently configured comes nowhere near that and there is extensive untapped potential for adding drives, video cards, etc.

A modular PSU would have been nicer, eliminating the need to find places to tuck away unused cables, but it would have been significantly more expensive and might even be a bad choice for those who have difficulty keeping track of the spare parts from their PC.

The Thermaltake Chaser A71 case has an obvious heritage in common with the cases used for Alien Artifact and Swan systems, but is much less expensive. Most of the major attractions are there, even so. Removable bottom air filter, four front panel USB ports divided between 2.0 and 3.0, SATA drive docking station on the top front, removable left panel for easy routing of cables behind the motherboard tray, removable drive carriers, additional fan spacer to ensure a liquid cooling radiator has enough clearance for a card in the x16 slot most often used for video, and more. It is a very nice case for the money.

Cooling Challenges

The cooling system is where I first encountered difficulty. This was my first time building a system with liquid cooling and it involved having to consider things like air flow that had been easily taken for granted before.

The package includes a fan to push air through the radiator, and the instructions detail how an existing rear fan can be used in conjunction to pull air through to the outside world. But the screws needed to mount both fans this way are not included; there are just enough for one or the other. That should be adequate for Jerry's usage but is a bit disappointing. At the least, they could have detailed the specs of the needed screws to simplify obtaining them yourself.

The packaging of the liquid cooler indicated it was usable on several socket types used for recent Intel and AMD systems, including LGA1150. All of my research indicates anything designed for the mechanical aspects of LGA1150 should also work with Skylake’s LGA1151 but if that is true I have to wonder about the accuracy of this mounting system.

It took a good deal of wrestling to get it screwed in place and there was considerable added anxiety when the story broke around this time of Skylake processors being damaged by cooling system mounting mishaps. It was not this brand of equipment but it was a worry I could have done without.

A Clock Too Fast

The next big issue was caused by the motherboard being very clever in its design but the accompanying documentation far less so.

Overclocking, running a processor at greater than its factory-specified rate, is a big emphasis on gaming-oriented boards. The cutting edge of new game releases challenges PC performance, and the dedicated gamer can never have too much horsepower in his box or resist trying to squeeze more from those ponies. This is all well and good but can become an annoyance for those wanting the overall quality and features of the high-end board without risking stability or outright destruction of components.

On the board is a three-position switch, offering Off, TPU I, and TPU II. According to the manual this is to maximize CPU performance by specifying the cooling method installed: TPU I for air cooling and TPU II for liquid cooling. This seemed in keeping with the dedicated power header for liquid cooling pumps also featured on the board. Nowhere in the description was it overtly stated that this is solely for overclocking and that it should be left in the Off position if one doesn't intend to overclock.

When the time came to power up the system for the first time, it automatically chose a setting (a 15% overclock) that was too unstable to allow an OS install. Poking around in the firmware turned up no obvious method to simply turn off overclocking. The best I could do was to reduce the overclock to 12%. This was fairly stable and allowed Windows to be installed, but ran into problems if the machine was allowed to go into hibernation mode. A forced shutdown would be needed to regain use of the computer.

I was stubbornly determined to find the answer on my own but once I relented I soon learned of the TPU switch’s true function from a hardware site forum. Turn TPU to Off and Eugene has run reliably at the proper speed ever since.

Front Panel Irritations

The next problem had to be my fault, but it raises an issue that has annoyed me for years. Motherboards are still using the same block-of-pins method for connecting front panel functions after nearly thirty years. The ASUS motherboard included a pin block that greatly simplifies the task, because you make all of the connections on it and then place it on the motherboard as a single item.

But there can still be problems. The connector for the reset button is two side-by-side pins, but the reset header on the motherboard is three pins. You only need two, but getting the right two was surprisingly difficult. I began to wonder if the case and motherboard were incompatible.

In the past I'd dealt with this by doing a little surgery on the case connector, cutting the end in half so the pins could be placed individually. I wanted to avoid that this time, and with some searching found a two-pin male-to-female extender. Annoyingly, these were only sold in groups of four and, worse, I ended up not needing them.

I did some research to assure myself that I understood which pins needed connecting and set about doing it over. This time it worked.

Why I had to repeat this was a mystery I'll chalk up to cosmic rays affecting my brain. Yet, why was this necessary at all? Why isn't there a single standardized connector to get the essential PC case functions set up? Power, power LED, reset, and HDD activity: those four are a given on every PC. It's long past time a single keyed connector, similar to those for front panel USB and audio ports, was standardized for everyone to use.

Wrapping Up

Complaints aside, this was a very enjoyable build. Much of the future was revealed, between incredible storage speed and native video support for 4K displays. An LG 27MU67 27" 4K monitor was obtained to put the latter through its paces, and we'll be reporting on that experience in the future. Suffice it to say, the 1080p screens I once dreamt of possessing are now merely OK.

We welcome comments about this build. And if you have built a nice system, we invite you to share your experiences with other CMR readers. Learn how you can submit your article to CMR here.

Parts List

Here are the components used in the building of Eugene, along with shortened links to the product pages. All links will open in a new window.

CPU: Intel Core i5-6500 : http://intel.ly/1ZTM86w

Case: Thermaltake Chaser A71 : http://bit.ly/1ZTM72u

PSU: Corsair CX750w : http://bit.ly/1ZTM86x

Motherboard: ASUS Z170-AR : http://bit.ly/1ZTM86y

SSD: Samsung 950 Pro : http://bit.ly/1ZTM86D

Cooling: Thermaltake Water 3.0 liquid cooling : http://bit.ly/1ZTM72z

Optical drive: LG 16X BD-RW : http://bit.ly/1ZTM7iR

RAM: http://bit.ly/1ZTM7iS and http://bit.ly/1ZTM7iU

Display: LG 27MU67 4K IPS monitor : http://bit.ly/1ZTM7iW

Chaos Manor Network Attached Storage Upgrade And Retirements – Part 3

Now that Eric has finished the NAS project installation and configuration, it was time to consolidate data from various systems, upgrade others, and consider system retirements. Part 3 of the project continues with Eric’s narrative:

With the NAS/RAID project completed, the next step was to track down the various elder PCs that were serving as network backup locations, back them up to the NAS, then shut them down in hopes of making a dent in the frightening power bill that had been shrugged off in the days when Chaos Manor was like Frankenstein's Castle for PCs, with a gruesome experiment running in every corner.

Some of these machines were quite long in the tooth and were due to go off to the farm where they could run and play with the other PCs. They weren't in operation, so they weren't adding to the power bill, but it was time to clear some space.

First up were the Satine and Roxanne systems. Satine was a Socket 939 (indicating a long ago AMD CPU generation) system with an Nvidia 6000 series video card and a pair of 500 GB drives that may have been mirrored.

The googles note an early mention of the Roxanne system in February 2008: http://goo.gl/lxT07M : “Roxanne, the Vista system that was the main writing machine before Isobel”.

An early mention of Satine (née Sativa) is around July 2006: http://goo.gl/gy1lP3 , also here http://goo.gl/zwR0e8 . Earlier mentions could be in the Chaos Manor columns printed in Byte magazine, but web versions of those columns are not available, at least with a quick search.

Beyond that it was hard to say, because Satine was inoperable. It would spin up the fans and do everything preparatory to booting, but never actually boot. Not so much as a POST beep of any kind. The properties of some system files indicated it was running Windows XP SP2, so there was probably little value there for anyone beyond salvaging the drives.

So the hard drives were backed up and formatted, placed back in the case, and Satine remained as a project for someone with a strange combination of motive and nothing better to do.

Roxanne was more promising. She had last seen life as Mrs. Pournelle’s workstation but had been replaced by a new build when the case vents had become clogged, causing poor Roxanne to overheat before System Restore could run long enough to repair the damage from the first overheating.

Even with the vents cleared, the Pentium 4 HT system was rather loud and hot. It isn't clear to me whether this was always the case and simply not a bother to Jerry's artillery-blasted hearing, or whether the cooling had become compromised at some point.

Certainly it wasn’t worth any significant investment to replace any of the cooling bits. But it was running Windows 7, raising the question of whether it could become a Windows 10 system with a potentially long life ahead. Therein the saga lies.

Updates and Upgrades

On both Vista and Windows 7 there was just the one Service Pack. (Windows 7 may still get a second due to its business footprint but I’m not holding my breath.) Going back to NT 4, Service Packs were once produced far more frequently. Internet access was far less widespread and the need to store updates locally for installing on numerous machines was far higher. It was especially helpful if a Service Pack replaced its predecessor. Install SP4, and SP2 and SP3 were included in that update.

As the internet and live Windows Update downloads became more the standard, it became more of a hassle to update a new install or a machine that had been offline for a long period. For Windows 7, by the time Windows 8 had launched, this had gotten a bit painful.

Businesses of the scale to have their own WSUS setup [Windows Server Update Services, a 'personal' Windows Update system to centralize and manage updates across the business environment] or enough identical machines to use an updated image weren't badly off, but supporting the SOHO market got annoying. I had one experience where several refurb PCs that came with Windows 7 SP1 needed almost 1.5 GB of downloads to be fully updated. This was a very slow process regardless of how good your broadband speed might be.

Well, Roxanne hadn’t been to Windows Update in over two years. On the first attempt it spent nearly two hours figuring out which updates it needed before I noticed the gas gauge animation had stopped. The Update app was frozen. Start again.

This time it was a bit better. It installed about 150 updates before stopping and announcing it had an error it could not work past.

Along the way, the Windows Defender anti-virus scan announced finding some malware. It was deleted, but the scan found several other copies as it worked through the drives. Roxanne had received copies of older machines' content in the process of originally entering service, and consequently the malware had implanted itself in each copy it found of certain system files. This was one of those packages that breaks Windows Update as part of its activities. Why I was able to get as many updates installed as I did before it kicked in is a mystery.

This meant going off and searching out the ‘fixit’ app to correct the damage. Still more downloads. Then Windows Update failed with a different error.

At this point Roxanne had been updating, more or less, for about 20 hours. (This could have been reduced some if I’d been there to respond every time it needed human input but I have this congenital condition that requires me to spend a certain portion of each day unconscious. It’s very inconvenient. The doctors call it ‘sleep’ and can only offer short-term mitigation.)

A Different Fix Needed

This time a different fix was needed but it was soon found and applied. Finally, almost a day after starting this quest, it was done. Roxanne was as up to date as a Windows 7 machine could be at that moment in early September of 2015.

But where was the Windows 10 upgrade offer in the notification area? It should have been in the last batch of updates. Apparently this issue comes up often enough that there is an app from Microsoft that looks at your machine and determines if it is eligible, and installs the upgrade app if so.

Roxanne wasn’t eligible for the Windows 10 upgrade for two reasons. One was correctable, the other was not. At least, not at reasonable cost, which in this case is not anything over $0.

The Nvidia FX5700 video card had long since fallen out of the range supported by the company after Windows 7. This could be fixed by replacing it with a newer card that would still be old enough to be free or of negligible cost. The other problem was the Pentium 4 HT CPU. It was too old to have NX Bit support.  https://goo.gl/4dgKB0

Considering how much past malware misery could have been prevented if this NX Bit feature had become common much earlier in microprocessors, it represents a perfectly reasonable place for Microsoft to say "here and no farther" when it comes to antique hardware. The last generation of Pentium 4 did have the NX Bit (Intel calls it XD bit) added but this was after Roxanne's CPU came out of the foundry.

So there it ends for Roxanne. She may find a home yet and brighten some life but we are done with her and bid her farewell.

Wrapping Up

The project met the need of providing centralized and more efficient data storage for the Chaos Manor network. By consolidating data into that central location with high-capacity drives, Chaos Manor gains efficiencies in data storage (data is not in several different places, with the attendant work of synchronizing it amongst several systems). It also makes backing up that data more centralized, which will be the focus of an upcoming project.

Using a RAID 6 configuration allows for data reliability, although you shouldn't rely on just a RAID 6 for backup or data recovery; it is more of a reliable centralized storage system. You really need to have a process in place for off-network storage of your important data.

As for older computer systems: there’s “old” and then there is “really old”. Determining the moving target that divides those is necessary to decide whether a system should be donated or simply sent to e-waste.

And so ends this project. We hope that it has provided you with useful information, and perhaps some thought for a similar project of your own. We’re always looking for guest authors, see here. Let us know what you think in the comments below, and share this article with others.

Chaos Manor Network Attached Storage Upgrade And Retirements – Part 2

We continue with the story of the new NAS system at Chaos Manor.

As the project got closer, Eric did some more research about the initial setup of a RAID 6 system.

From what I’ve read on Netgear’s support forum, the initialization of the RAID 6 setup can take a LONG time but since the unit isn’t in active use yet this will only be a mild annoyance.

Eric noticed a Netgear RN10400 RAID system was advertised by Fry’s, so he decided that would be appropriate for the project. (See http://goo.gl/lYAl2p for product details.)

There wasn't so much a survey [of system possibilities] as a Fry's ad offering the unit for approximately $100 off its normal price. At the time I grabbed it there were potentially four different places it might have gone. If I had used it as a media server at home, I would have gone for a JBOD configuration to maximize capacity, as nothing on it would have been irreplaceable.

The initial setup [of the RN10400] was very straightforward. A full set of four 4 TB drives from Seagate (Model ST4000VN000) were purchased and installed in the carriers. This is familiar to anyone who has done much PC construction with cases that provide removable carriers. Once the drive is installed in the carrier, it slides into its slot and connects to the SATA interface.

Upon powering up, the NAS defaults to a RAID 5 configuration and immediately sets about verifying the drives and creating the RAID. This would eventually result in slightly over 10 TB of storage capacity. Netgear offers an app called RAIDar that searches the LAN for any of the company's NAS products and reports their address and status. From there you can log into the NAS itself and see what progress it has made, and specify a wide variety of options, such as user accounts, access limits, and backups to other NAS units or offsite via cloud services. This is also where you'd break up the default volume to recreate it in a different RAID configuration.
The part that got tricky was making sense of the instructions. I cannot say whether Netgear had updated the firmware and the PDF manual hadn't caught up yet, or whether it had always been wrong. It also may be intended for a different model and put in this version by mistake. The method detailed for destroying the existing volume simply didn't apply to the interface presented.

The System Setup and Configuration

Eric continued the work on the Netgear ReadyNAS 104 4-bay NAS system.

After much consideration of the choices, I decided to accept safety over capacity and move the Netgear's four 4 TB drives to a RAID 6 configuration. This effectively means that half of the volume's capacity is consumed by the non-data use of sectors, to ensure the volume can be reconstructed without loss if any one drive should fail.

Eventually I noted the gear icon that had appeared elsewhere and found it was an active link that brings up an otherwise undocumented menu. From there the task more or less matched that described by the manual. The four drives were applied to a new RAID 6 volume, which would offer 7.6 TB after 60 hours of setup. They weren't kidding. It took that long. A search of the forums on the Netgear site produced a post that explained the RAID 6 setup has far more overhead than RAID 5 or lower, and that this is one of the reasons their higher-capacity NAS products use Intel processors with specialized functions for the task.
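The arithmetic behind "half the capacity" is simple: with only four drives, RAID 6's two parity drives cost as much raw space as RAID 10's mirroring. A rough sketch in decimal terabytes (the 7.6 TB the ReadyNAS actually reports is lower because of TB/TiB rounding and filesystem overhead):

```python
# Rough usable capacity for four 4 TB drives under different RAID levels (decimal TB).
drives = 4
size_tb = 4.0

raid5 = (drives - 1) * size_tb       # one drive's worth of parity  -> 12 TB
raid6 = (drives - 2) * size_tb       # two drives' worth of parity  ->  8 TB
raid10 = (drives // 2) * size_tb     # mirrored stripes             ->  8 TB

print(f"RAID 5:  {raid5:.0f} TB usable")
print(f"RAID 6:  {raid6:.0f} TB usable")
print(f"RAID 10: {raid10:.0f} TB usable")
```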

Another optional setting was 'Bit rot protection.' This tries to head off failures on the drive media before data is lost, but Netgear warns of a potentially dire performance hit. I turned it on and did note that when uploading the backups from the recently retired machines the throughput was frequently awful, often descending low enough to be measured in KB/s. Before the next big upload I'm going to switch this feature off to see if it is the culprit. Once the bulk of initial backups are made it may have little bearing on day-to-day use, but we'll just have to watch and see. I recall some line about those who would sacrifice performance for data security ending up with neither.
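An easy way to settle whether 'Bit rot protection' is the culprit is to time the same large copy to the NAS share with the feature on and then off. A minimal sketch; the local file and NAS share paths are invented placeholders:

```python
# Crude throughput check: time one large file copy to the NAS share.
# Run it once with 'Bit rot protection' on and once with it off, then compare.
import os
import shutil
import time

SRC = r"C:\temp\test-payload.bin"            # a few GB of local test data (placeholder path)
DST = r"\\readynas\backup\test-payload.bin"  # hypothetical NAS share path

start = time.time()
shutil.copyfile(SRC, DST)
elapsed = time.time() - start

size_mb = os.path.getsize(SRC) / (1024 * 1024)
print(f"Copied {size_mb:.0f} MB in {elapsed:.1f} s -> {size_mb / elapsed:.1f} MB/s")
```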

Because this is a fairly inexpensive model using an ARM-based SoC [ARM processor, System-on-Chip; think Raspberry Pi, which also uses an ARM-based SoC], the process of creating the new volume was quite lengthy, coming in at 60 hours. High-end models most often use Intel processors with substantially better performance in the specific areas where complex RAIDs place their demands.

SoC is System on Chip. ARM processors are rarely found all by themselves. The myriad licensees package them with other functions, licensed from other companies or created in-house, to produce a device specific to the intended purpose. This has become increasingly common across the industry, especially in the consumer sector, where the other functions are a given.

Thus most Intel processors these days have a GPU included, and numerous other functions that were formerly found in separate chips. Due to the way ARM's business model works, an ARM-based device is likely to be far more customized than something out of Intel, as it simply isn't worth Intel's time to pursue markets below a certain size. Intel does have a line within the Atom product range that is aimed at NAS builders, but at a significantly greater cost than ARM-based competitors. You'll find these on higher-end NAS models with a greater range of abilities than were needed here. For example, some high-powered NAS products can act as a self-contained media library with the hardware to drive a TV all by themselves, rather than relying on a client device.

As Eric worked on the NAS/RAID setup, he set up the volumes as RAID 6 instead of the default RAID 5.

The Netgear ReadyNAS 104 was populated with 4 TB Seagate NAS drives. This, when it finally finishes, will offer 7.6 TB of space to work with.

I intend to write at greater length about it but was waiting until the change was done and more material might be added. There is a lot of capability in the device, making for a dizzying array of choices for how best to make use of it.

I'll also note that the 60 hours needed to perform the RAID 6 volume creation would be greatly reduced on higher-end products using the Intel parts that have been targeted at the NAS business. This model, and most like it in price and features, use ARM chips that cannot match the Intel products on the more computationally intense RAID configurations, according to Netgear reps in their forums.

Minor Annoyances Solved

There was some initial difficulty with the NAS alerts emails not getting to Dr. Pournelle’s account; they were going to Eric’s email account. Eric noted that, along with some additional info on the configuration.

Currently the email alerts are coming to me [Eric]. Why Jerry's account wouldn't work, using the same settings displayed in Outlook, remains a mystery.

The NAS has a pair of gigabit Ethernet ports and is administered through an intranet web UI. It supports the installation of apps to add custom features but a brief survey didn’t make any stand out for Jerry’s needs.
Netgear warns that RAID 6 on this product range will significantly reduce write performance but with so few users on the Chaos Manor network this isn’t likely to be a problem. After the initial backup from an actively used PC the following updates should be quick, unless a complete image is being made every time.

The Swan system has an eSATA drive docking slot that lends itself better to that level of backup. A USB 3 external drive should suffice for AlienArtifact and the same drive could also serve the Surface Pro 3 since it has so little storage capacity compared to the desktop systems.

As the project continued, with status reports to the Advisors for their reading pleasure, Advisor Peter wondered:

I'm still curious to know how much of that gigabit capacity can be delivered over the Chaos Manor network, but it probably won't come very close to the theoretical performance of the array itself, which should be well above 1 Gbps for reads, at least.

That’s really saying something, since I think all the computation is just XOR operations and table lookups in normal operations. But cheap ARM chips have few of the data-movement and vector-processing features of Intel’s cheapest processors, and that’s probably the real issue rather than “computation” per se.

Advisor Brian Bilbrey chimed in about initial and ongoing performance of the system.

A reduced write performance is primarily an issue on initial seeding of the device. Jerry undoubtedly has a lot of data he’d like to centralize, and getting all of that on the first pass will be the ordeal. Keeping it updated thereafter is less likely to be affected by the performance issues.

Eric continued the discussion:

I won’t be surprised if the network speed is much more of a limiting factor and the difference in write performance impossible to discern without much bigger loads than we’re ever likely to create.

Perhaps [it could be measured] if we tried to save an image from every workstation simultaneously. None of those machines has more than 1.25 TB of local storage (250-ish GB SSD and 1 TB hard drive) and most of that empty. (There is 192 GB on the Surface if you include the microSD card.)

If we did full backups every night there would be plenty of time to avoid overlap.
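For a sense of scale, here is an illustrative calculation (not a measurement from the Chaos Manor network) of what a worst-case full image would take over a single gigabit link:

```python
# Rough backup-window estimate: a full 1.25 TB image over gigabit Ethernet,
# assuming ~110 MB/s of sustained real-world throughput.
data_tb = 1.25
throughput_mb_s = 110

hours = (data_tb * 1_000_000) / throughput_mb_s / 3600
print(f"Full {data_tb} TB image at {throughput_mb_s} MB/s: about {hours:.1f} hours")
```

That works out to roughly three hours per machine even if every byte of local storage were in use, so staggered overnight backups have plenty of headroom.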

The saga continues in the final installment, as data is moved from systems, and older systems are considered for retirement. Let us know what you think in the comments below, and share this article with others.

Chaos Manor Network Attached Storage Upgrade And Retirements – Part 1

In this three-part series, Chaos Manor gets an upgrade of its data storage with the installation of a new Network Attached Storage system. Along the way, several old systems are deemed retirement candidates. We start with Part 1, where the project and features are discussed amongst the Chaos Manor Advisors:

The Chaos Manor Advisors determined that Dr. Pournelle's various systems at Chaos Manor needed a cleanup, consolidation, and upgrade. There was a need for a better backup and archiving process, along with some retirement of systems. Consolidating systems and data storage in light of Dr. Pournelle's mobility problems was another objective, now that the new Chaos Manor wireless network was in place.

One aspect of this consolidation was to create a Network Attached Storage system with RAID capability to serve as centralized data storage. A backup process was also needed. And there was a need for the archive to be protected against any encrypting-type malware. Although Dr. Pournelle practices 'safe computing' that reduces that risk, a protected backup of his data, including past and future books, was deemed to be a good objective for this project. We thought this project would be interesting for Chaos Manor Reviews readers.

Chaos Manor Advisor Eric Pobirs, an experienced technician who works with Dr. Pournelle's son Alex (as well as doing some freelancing), took the lead on this project. A discussion among the Advisors covered the configuration and issues involved in creating this NAS/RAID system.

Eric started out with his general objectives:

Well, the idea was to have capacity wildly in excess of need to reduce the amount of management concern it generates once it has been configured fully. The difference in cost for the somewhat safer lower capacity drives is fairly minor and they’d still be at risk for the Rebuild+URE [more on this below] problem. So doubling up on the 4 TB drives in RAID 6 likely works out better than say, 2 TB drives in RAID 5, for a difference of around $100 for the set.

Part of this was offering the example of just how amazingly cheap this stuff has gotten over the years and how a bit of research can lead to better results without massive expense. Now, some might regard this investment as massive expense but creating organized bytes is Jerry’s livelihood, so this is just insurance. Also, the use of better qualified equipment should win back some of the expenditure in reduced power costs for the house. A few percent here, a few percent there…

After some thought, Eric came up with the outlines of a plan.

NAS and Backups

The main objective of this project was to determine how to configure a backup system for all of the computers at Chaos Manor. Backups are important, and Dr. Pournelle has lots of data: massive amounts of emails and his equally massive book files.

After a survey of the possibilities, Eric decided on a Network Attached Storage (NAS) system that consisted of Netgear ReadyNAS 104 4-bay NAS (http://goo.gl/lYAl2p )

Advisor Brian Bilbrey has much experience with large systems, being a senior systems administrator. He discussed the basics of the various types of RAID systems, beginning with an explanation of an “URE”, an ‘Unrecoverable Read Error’:

Magnetic disk storage sizes are now on the same order of magnitude as the quoted bit error rate for reading data from the disk. That is, for a single 4 TB disk in ordinary use, the chances of hitting a 1-in-10^14 Unrecoverable Read Error are pretty small, because you don't read a lot from your drive at any given time.

However, if you have an array of five 4 TB disks in a RAID 5 configuration, then you've got approximately four disks' worth of data and one disk's worth of calculated parity spread across all of the disks. If any ONE of those disks fails, then when you put in a new disk to rebuild that array, ALL 16 TB of bytes will be read to rebuild. There's a significant chance that during that process, a read will fail. At that point, the array cannot be rebuilt. Data done and gone; restore from proper backups.
I recommend RAID 6 for 4 or more disks, and 2 or 3 way mirrors for 2 or 3 disk systems. Yes, you’re “throwing away” storage. Or, to put it another way, you’re managing the risk of data loss. With RAID 6, during the rebuild, you can lose a disk, suffer a URE during the rebuild, and still have all your data.

Personally, I also buy Enterprise-grade disks, because there’s usually another factor of 10 added to the URE reliability. For more info, use your favorite search engine and the phrase “URE RAID 5” without the quotes.
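To put a rough number on that warning, here is a back-of-the-envelope sketch using Brian's example (five 4 TB drives in RAID 5 and the commonly quoted consumer URE rate of one error per 10^14 bits). The exact figure matters less than the fact that it is nowhere near negligible:

```python
# Back-of-the-envelope odds of hitting an Unrecoverable Read Error (URE) while
# rebuilding a degraded five-drive RAID 5 built from 4 TB consumer disks.
URE_RATE = 1e-14         # commonly quoted consumer figure: one bad bit per 1e14 bits read
DRIVE_BYTES = 4e12       # 4 TB drive, decimal bytes
SURVIVING_DRIVES = 4     # a five-drive RAID 5 rebuild must read the other four in full

bits_read = SURVIVING_DRIVES * DRIVE_BYTES * 8
p_failure = 1 - (1 - URE_RATE) ** bits_read

print(f"Bits read during rebuild: {bits_read:.2e}")
print(f"Chance of at least one URE: {p_failure:.0%}")
```

Under those assumptions the rebuild hits at least one URE roughly seven times out of ten, which is why RAID 6's second parity (or enterprise drives with a tenfold better URE rating) matters so much.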

With that explanation, Brian continues:

One thing I’m pondering in light of the Rebuild+URE problem is whether a RAID 10 might be safer. This would be a two-drive stripe set mirrored by the second set of two drives. This cuts the raw capacity from 16 TB to a ‘mere’ 8 TB, which is still a vast capacity for storing primarily text files. In this case, recovering from a drive failure is more a matter of copying than a complex rebuild and the NAS should keep working with the intact set until the replacement drive is installed.

The Netgear box will also do RAID 6 with the four drives but as the capacity works out the same I find myself wondering what advantage remains, if any. RAID 10 may have the advantage in continued functionality after the loss of a single drive, whereas I have the impression a RAID 6 would be out of commission until the failed drive is replaced and the volume rebuilt.

In 234 pages the manual has remarkably little to say about drive failures, how to handle them, and how different configurations affect this.

Advisor Peter Glaskowsky agreed with Brian, adding:

To add to Brian's reply, a RAID 6 array not only keeps working after a single drive failure, it still has redundancy: it becomes the equivalent of a RAID 5 array. Even two drive failures in a RAID 6 array will not stop the array from working.

So if you have an effective RAID 6 option, that’s my recommendation too. I know it’s painful to lose half your capacity, but in the long run, that’s better than losing all your data.

Brian added some additional thoughts about the various RAID types:

RAID 6: Lose one drive, you’re running degraded, and can rebuild at leisure. IF there’s a bit read failure during the rebuild, you have the second parity to fall back on.
Lose two drives (or lose a second drive during the rebuild after the loss of a first drive) and you’re running degraded, with no backstop. If you lose a third drive while rebuilding against a two-disk failure, you’re dead.

RAID 10 (and friends): Lose one drive, you’re running degraded. Rebuild, and hope there’s no bit read failure.
Lose two drives, and if it's half of one mirror pair and the OTHER half of the other mirror pair, you're still running degraded. But after one drive failure, you have a one-in-three chance of catastrophic failure during the rebuild, should there be a bit read error.
The point of spinning storage is to have large data available for immediate access. Making periodic copies of this data to spinning rust that is then stored offsite, to rebuild from if you lose the RAID 6, is prudent.

One of the considerations of a RAID system is that it is more of a centralized storage area than a full backup/restore solution. As Eric noted:

This article and its links cover the nature of the problem:
http://goo.gl/ungcQI
In short, drive capacity has advanced far faster than reliability and it may not be possible for reliability to ever be high enough to overcome the odds of a crippling failure. This is why RAID cannot be regarded as a backup but merely a way to centralize data to be backed up.

In the next installment, a system is selected, and installation and configuration is begun. Let us know what you think in the comments below, and share this article with others.

An Evolving Backup System

[Chaos Manor Reviews reader Drake Christensen was reading about the Network Attached Storage (NAS) system your editor set up with the Raspberry Pi, and decided to share his experiences over the years with his NAS and backup system configuration and practices. – Editor]

I'm very happy with my backup system, which has evolved over many years, and I thought I'd share my experiences as I have enhanced and improved it to its current configuration.  I think it was on one of the Chaos Manor Mail pages that someone declared:  "If it's not in at least three places then it's not backed up".

I use Windows at home. I’ve had a Network Attached Storage (NAS) on my home network since 2009. Even though they’re a little more expensive, I’m a big fan of NAS over external USB drives.  I worry that if Windows gets confused and trashes a drive, it may also trash any backup media that’s plugged into it.

Since a NAS is a separate computer, that provides a bit of a buffer to protect against the disk structure getting destroyed.  Also, just as a simple practical matter, the NAS can be accessed by multiple computers on my home network.  And, I don’t have external drives cluttering my work area.  The NAS is off in the corner, out of the way.

I started my home NAS with a two-drive D-Link DNS-321, which is still attached to my network.  This older NAS is limited to 2 TB drives. The D-Link is very hack-able, and I think I could have downloaded a custom build of the OS that would let me use larger drives. But I just don’t have time to add another hobby.

In early 2014, I added a hot new Windows box that I had built by a boutique builder, iBuyPower. My previous, circa-2009 iBuyPower box was relegated to being a secondary chat/email machine that I use on the same desk.

I needed more backup space, so I added a Synology DS213j.  This two-drive system currently has only one 4 TB drive in it.  The admin page of the Synology is quite a bit slicker than the D-Link, and it has a lot more optional features available through their interface.  And it is being actively updated.

With my first NAS, I used a conventional backup program, which did daily backups to the NAS.  But, for some reason, even though the data was growing only slowly over time, the backup was taking much, much longer.  Eventually, it was taking most of a day, even for an incremental backup.  I never did figure out what was causing the performance issues.

Updating over the Years

Somewhere around 2012 I started looking for a program to give me Apple Time Machine-like capabilities on Windows.  I wanted to have a file backed up within a few minutes of when it was modified, with multiple versions stored.  I tried one commercial product, Genie Timeline; it wasn't horrible.  But I found its interface a bit too "cute" for my taste, and I felt that it got in the way.

Eventually, I found AutoVer mentioned in several places.  It's certainly not a pretty program, but I find it fairly straightforward.  I've been running that on a few machines ever since.  I set it to back up my entire \Users\(me)\Roaming directory, plus my data drive.

During the first few weeks, it does require some attention.  Some programs, like Firefox and Evernote, for example, will touch some large files fairly often, which can quickly eat up space on the backup drive.  I was able to break up the backup task into three or four smaller pieces, with custom rules for each task, and greatly reduce the number of versions it keeps of those larger files.

Unfortunately, “Real Life” has encroached on the author of AutoVer, and it is teetering on the verge of abandonware.  He rarely logs in to his own forum, anymore. It is still reliable for me, and I’m still using it on two machines.
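For the curious, the core of what these Time Machine-like tools do is not exotic: watch a directory tree for changes and copy each modified file to the backup target under a timestamped name. Here is a minimal sketch of the idea, assuming the third-party Python watchdog package; the source and NAS paths are invented placeholders, and real tools like AutoVer add the filtering rules and version limits described above:

```python
# Minimal sketch of a "copy on change" versioning backup.
# Requires the third-party watchdog package: pip install watchdog
import shutil
import time
from datetime import datetime
from pathlib import Path

from watchdog.events import FileSystemEventHandler
from watchdog.observers import Observer

WATCH_DIR = Path(r"C:\Users\me\Documents")   # hypothetical source directory
BACKUP_DIR = Path(r"\\nas\backup\versions")  # hypothetical NAS share

class Versioner(FileSystemEventHandler):
    def on_modified(self, event):
        if event.is_directory:
            return
        src = Path(event.src_path)
        stamp = datetime.now().strftime("%Y%m%d-%H%M%S")
        dest = BACKUP_DIR / f"{src.name}.{stamp}"      # each change gets its own timestamped copy
        try:
            dest.parent.mkdir(parents=True, exist_ok=True)
            shutil.copy2(src, dest)
        except OSError:
            pass                                       # file may still be locked by its writer

observer = Observer()
observer.schedule(Versioner(), str(WATCH_DIR), recursive=True)
observer.start()
try:
    while True:
        time.sleep(1)
except KeyboardInterrupt:
    observer.stop()
observer.join()
```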

More Enhancements to My Backup System

When I purchased my latest machine I decided to find an alternative that appears to have more recent work done on it.  I ended up with Yadis! Backup.  Its interface is a bit more familiar and friendly.  I’ve been running it for about 18 months now.  The only issue I’ve had with Yadis! Backup is that over time the log file will grow huge and crash the program on start.  Every couple of months I’ve had to rename/delete the file, which clears up the problem.  I have contacted their tech support a couple of times and received reasonably prompt responses.

One wrinkle that I've recently solved is automatically logging into my network drives.  Apparently, when checking the "Automatically connect" box in Explorer, the order in which Windows attempts to log into the network shares versus loading its network drivers during boot results in an error, leaving me unconnected.  I had hacked together a quick PowerShell script to do that, but I wasn't happy with it. A few months ago, I started looking around and found the open source NetDrives, a free utility that I can run on startup to connect the network shares when the OS is ready to take them.
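The retry-until-the-network-is-up logic is the whole trick. As a rough illustration of what such a startup helper does (not how NetDrives itself is implemented), this sketch shells out to Windows' built-in net use command and retries each mapping a few times; the drive letters and share paths are placeholders:

```python
# Minimal sketch: map network shares at startup, retrying while the network
# stack finishes coming up. Drive letters and UNC paths are illustrative.
import subprocess
import time

SHARES = {
    "Y:": r"\\dlink-nas\backup",
    "Z:": r"\\synology\backup",
}

for letter, unc in SHARES.items():
    for attempt in range(5):
        result = subprocess.run(
            ["net", "use", letter, unc, "/persistent:no"],
            capture_output=True,
        )
        if result.returncode == 0:     # mapped successfully
            break
        time.sleep(5)                  # network not ready yet; wait and retry
```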

So, that’s one extra backup copy.

Going to the Cloud

A couple of years ago I saw an ad for a cloud backup service, called Backblaze.  That got me started researching.  I found lots of good reports about Backblaze, but it was a little expensive for my use.  (I record some amateur sports videos, which greatly bulk up my data.)

Carbonite is well known, but at the time I was looking at it, it was also very expensive for large backups. It has been long enough that I don't remember the specific prices at the time. I recall that one of my machines had over 600 GB on the data drive, and that Carbonite was in the several-hundred-dollar range for that much data.

I ended up with Crashplan, which gives me unlimited data on 10 machines for $149/year.  I added my mother’s machine to my account (and I set up a NAS for her, too.)  Crashplan is also Time-Machine-like, in that it backs up continuously, and keeps multiple versions.  I’ve actually made use of Crashplan to restore a couple of files.

I don’t want to sound like a commercial for Crashplan, but there are a couple of other features that are worth mentioning which have been useful to me in my configuration and usage. (As they say, Your Mileage May Vary.)

First, since all my machines are under the same account, if I were on the road, I could conceivably use my laptop to pull a file from the cloud that was saved from one of my desktops. They also have Android and iOS mobile apps to access files backed up in the cloud.

Crashplan can also back up from one Crashplan machine to another, whether local or remote. And, it can back up to physically attached drives. It does not appear capable of replacing AutoVer and Yadis! Backup to back up to a NAS, though, even when the shares are mapped to drive letters.

Cloud Backup Advantages

The prices and packages of all of these cloud systems have changed a lot since I looked at them a couple of years ago. Backblaze is now $50/yr/computer, unlimited, and they offer a stolen-computer locator service. Carbonite is $59/yr for the first computer, with unlimited data except for video; video files and additional computers cost extra. All of them offer seeded backups (the option to send them a drive with an initial copy of your data), and there is an option for them to send you a recovery drive. In any case, do your homework before choosing a cloud backup service to see which best fits your needs.

Cloud systems like this also protect against ransomware.  Since the backup happens only through the software service, ransomware has no way to get at that set of backup files to encrypt them. For a while you might be backing up encrypted files, but with this kind of versioning you can get back to a good copy from before the infection. The NAS, on the other hand, is still vulnerable if the virus looks to see what shares the machine is connected to; from a Windows point of view, the share is “Just Another Drive”.

An aside:  One thing I found when researching cloud backups is that there is one company that is poisoning Internet searches.  They have released four or five nearly identical programs under different names and pricing schemes, then paid for a large number of reviews and commissioned a bunch of Top 5 and Top 10 lists with their programs near the top, to make them look a lot better than they are.  Dig a little deeper and there are a lot of complaints about that company: bait-and-switch pricing, poor customer service, and technical problems.

Wrapping Up

My current backup system consists of Network Attached Storage, Time Machine-like versioning for local backup (is there a more generic term for this sort of thing?), and a commercial cloud versioning backup. With this system in place, I can set up multiple computers on my network for continuous backup.

There are a few things I really like about my backup system.

The first is that, after the initial teething period, it is completely automatic.  I don’t have to remember to do anything.  It Just Works.

Also, the multiple versions have come in handy. I’m a bit of a packrat, and I like having multiple versions of stuff I’m actively working on. It’s only a few times a year that I get that breath-sucking “Oh, no” feeling when I saved instead of canceled, and the versioning has saved me a few times. One example: my mother made a change to her book collection database file and didn’t tell me about it for over a week.  I was able to pull a version out of Crashplan from before the change. I chose Crashplan because it happened to be the first time I needed an old version of a file since installing it, and I wanted to try the interface. It worked about as I had expected.

Next, I like the speed of on-site storage, as my first place to restore from.

And, finally, it adds a lot of peace of mind to know that I have off-site storage, in case of fire or theft, or similar disasters at the house. Plus, there is the slim chance of ransomware wiping out everything locally.  And, again, I don’t have to think about it.  I don’t have to discipline myself to rotate storage to have off-site safety.  For practical purposes, it’s built into my computers, now.

My solution is maybe a little pricey. I spent about $300 initially, for the D-Link DNS-321 and a 1 TB drive. The more recent Synology DS213j with a 4 TB drive can be had for about $300, at today’s prices. And the yearly cost for cloud backup is $149.

The NAS is a one-time expense, lasting me for years.  Crashplan is an ongoing expense.  As always seems to be the case, it was all a little more than I’d prefer to spend.  But, given the bulk, I think it’s reasonable.

[What do you do for a backup system? Is it extensive, or do you even have a backup system? Let us know in the comments. Or you can submit your own experiences with backup processes on your home computers; see our Submissions page for details. – Editor]

A Raspberry Pi Media Server

If you haven’t heard, the Raspberry Pi is a small computer about the size of a deck of cards, great for tinkering with on many projects. There are tons of information and projects available for the Pi; just ask your local search engine. (This article is not meant to be a full technical review of the Pi; many other sites have done that. This page from the Raspberry Pi Foundation has the Raspberry Pi specs and capabilities, and there is lots of other info on that site to understand the power and capabilities of the Pi.)

I was intrigued with the concept of using the Pi as a little computer, thinking back to the days when I would build my own PC. My first personal computer was the original IBM PC model 5150. It had 16K of memory and an 8088 processor, and the operating system was Cassette BASIC. (Yes, I am that old.)

I spent about $5K on the whole setup, including an RGB monitor and some accounting software for my wife to start an accounting business with (that was the excuse for the computer). I added 256K more memory with an expansion card, added a couple of 360K floppy drives, and later a 10MB hard disk. It was very powerful.

Then. It was very powerful back then.

More Power!

The Raspberry Pi is more powerful than my first computer. And it costs $35 for the computer itself. I decided that I wanted to get one and try to build a media server for my large collection of DVDs.

You do have to spend a bit more than $35 to make it a usable system. I found a kit from Canakit on Amazon that included the Pi Model 2 (quad-core 900 MHz, 1GB RAM), a 2.5 amp power supply, a Wi-Fi dongle (Ralink RT5370 chipset), an 8GB MicroSD memory card (which included the NOOBS installer), a heat sink, and a case. All of that for around $70. I got mine from Amazon here; there are several kits available, so you can get only the parts you need for your project. (That link has all the details and pictures of the parts in the kit.)

I then got a 1TB USB hard drive ($70, such as this one), along with a powered USB hub ($18 here), since the Pi’s USB ports don’t supply quite enough power to run the hard drive. So my total expenditure was around $160. A bit more than the $35 cost for the Pi itself, but much less than my original computer, and more powerful.

SWMBO allowed the purchase, because her hobby (scrapbooking) results in almost daily deliveries of supplies. So my expenditure, in the grand scheme of ‘things we do around here as a hobby’, was acceptable (she is also the CFO here).

All the pieces arrived in a couple of days.

A Great Tutorial

In the meantime, I poked around the Interwebs for a good tutorial on how to set up the Raspberry Pi as a media server, and I found an excellent series from a guy named Mel Grubb.

Back in the old days (pre-Windows, DOS 1.0 days), I got pretty good at doing computer things with the command line. The Raspberry Pi OS is “Raspbian” (installed via the “NOOBS” installer on the memory card), which is a distribution of Debian. That’s Linux stuff, which I have played around with some over the years, but never on a daily basis. But I knew some of the concepts, so it wasn’t totally unfamiliar.

Using Mel Grubb’s excellent tutorials, I was able to enter the necessary incantations to install and configure all of the software needed to set up the MiniDLNA media server software. I won’t repeat all of that here, but Mr. Grubb’s tutorials are the place to go for a very easy and clear introduction to getting things going. In addition to the MiniDLNA instructions, he tells you how to set up a NAS (Network Attached Storage), BitTorrent, a VPN, and more. All very readable, with clear instructions.

The result was a full configuration of the Pi as a media server, with remote access to the Pi (via Secure Shell (SSH), and control via the Webmin interface), so it doesn’t really need a keyboard or monitor for things to work. There were a few false steps along the way, so I needed a bit more googling (plus some questions answered by Mr. Grubb) to figure things out until everything worked as I wished.
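As a small illustration of what ‘headless’ operation looks like day to day, here is a sketch assuming the Raspbian defaults (user ‘pi’, hostname ‘raspberrypi’) and an SSH client on your Windows machine (PuTTY works; newer Windows versions include an OpenSSH client). Your hostname, user, and addresses may differ.

    # Hypothetical example; your Pi's hostname/IP and user may differ.
    ssh pi@raspberrypi.local                        # command-line access for setup and maintenance

    # Webmin's web interface listens on port 10000 by default.
    Start-Process "https://raspberrypi.local:10000"

If the .local hostname doesn’t resolve on your network, use the Pi’s IP address instead.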

Ripping and Copying

Now that the Pi Media Server was configured, and visible on my home network, it was time to figure out how to ‘rip’ the DVDs into media files that could be stored on the USB hard drive.

After a few false starts, I settled on the WinxDVD software, at about $35. Installing the software is the usual install-wizard process. Operation is a basic three-step process: insert the DVD, click a few buttons, and the software rips the DVD into your desired format (I chose MP4). The ripping process takes about 45 minutes per DVD, depending on the DVD length and the hardware capabilities of your computer. I ran it on my HP Pavilion DV7 laptop under Windows 7, and all was still well after the Windows 10 upgrade. The WinxDVD software runs nicely in the background, so I could use my laptop for other purposes while the ripping was being done.

The Pi is on my local network via either its wired Ethernet port or the Wi-Fi dongle, and video playback over wireless is just fine, without any ‘stuttering pixels’.  I could transfer the files from my laptop over the LAN, but I found it a bit faster to shut down the Pi, connect the USB hard drive to my laptop, and transfer the media files that way. Once the USB hard drive is connected back to the Pi Media Server and the Pi is restarted, all of the DVDs I have ripped are available on my networked media server.
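For the times I do copy over the LAN instead of moving the drive, a one-line robocopy from the laptop handles it. This is only a sketch; the source folder, the Pi’s hostname, and the share name are hypothetical and depend on how you set up the file share (Mr. Grubb’s NAS tutorial covers that part).

    # Hypothetical example: copy newly ripped MP4s to the media folder shared by the Pi.
    robocopy "D:\Rips" "\\raspberrypi\media\Movies" *.mp4 /r:2 /w:5

The /r and /w switches just keep robocopy from retrying forever if the Pi drops off the network mid-copy.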

Viewing the Movies

We have a Roku box here connected to the main TV (well, there’s another one for another TV). The Roku is connected to our LAN via Wi-Fi, and it has an application that will connect to the Pi media server, so all access to the movies on the Pi media server can be done with the Roku remote. The quality of the movies is just as good as from the DVD player. (The WinxDVD software will not handle Blu-ray discs, but they have a Blu-ray version available.) We’re not that picky about picture quality; standard DVD resolution is just fine for us, and even our older DVDs play fine through the Pi Media Server. I also copied some digitized home movie files, and they viewed fine. (Now I have even more things to embarrass my children when they visit.)

There are ways to convert your old video tapes to MP4 files; a quick search came up with this article. The process requires an RCA-to-USB capture cable and some software, plus your VCR. I haven’t done that, but it would be a way to get more old home movies onto the media server. There are also services that will do the conversion for you.

As I was reviewing this article with the Chaos Manor Advisors, Eric Pobirs chimed in with this observation on VHS conversion:

    I did that quite a lot for my sister when she had a VHS-C camcorder. The product we used was the Pinnacle Dazzle, which came with their Studio software. (They’ve since been acquired by Avid, so they have heavy-duty connections.) That product now lists for a lot less, at $70. It should work with anything that outputs to composite or S-Video.

http://www.pinnaclesys.com/publicsite/us/products/dazzle/dvd-recorder-hd/

There are a lot of much cheaper products out there. A few months ago I got an SIIG-branded device that Fry’s was selling for under $10 on a promo, just in case I ever needed such a thing again, since the Dazzle went missing at some point after my sister switched to a digital video camera.

The Pi MiniDLNA Media Server will also handle pictures and music, so you can access those via your LAN; just copy the files to the appropriate folders on the USB hard drive. The 1TB drive will hold a lot of movies and music, and the MiniDLNA software on the Pi will handle multiple hard drives if needed.

Wrapping Up

The whole project was fun to do. So much so that I made two more (one for each daughter and her family). I got a simple wooden case from the local craft store and a can of brown spray paint, cut a notch in the back of the wooden box for cables, and used some Velcro squares to mount the three pieces in the box. The picture with this article shows the finished project. (It is convenient to have a little surge protector to power up everything, since the Pi doesn’t have a power switch, just an external power supply.)

The Raspberry Pi is an interesting platform for anyone to try out. The Pi’s Raspbian OS comes with Minecraft, the Python programming language, and Scratch, a simple programming language for kids to try out.

There are a ton of projects out there for the Pi. I’ve seen home security, robotics, motion-sensing cameras that send text alerts, flashing lights, wireless phones, mini-tablets, and more; any web search will find ideas for your own projects. Home schoolers will be able to find projects that will help any kid ‘get geeky’. If you can think of a project for it, chances are that someone has already done one that you can try out.

Since the Pi’s operating system is on the memory card, you can swap it out for your different projects. The ‘HAT’ interface (Hardware Attached on Top) can be used to control just about anything. The possibilities are many and varied.

And it’s much cheaper than my original $$$ computer.

What do you think? Have you made something with the Raspberry Pi? Use the comments below to add your thoughts. Or write up your own story and submit it to Chaos Manor Reviews for publication consideration – details are here.

XBox as a DVR

Eric Pobirs passed this information along to the Chaos Manor Advisors, and we thought you might be interested. – Editor

Microsoft announced today during their Gamescom presentation their intent to offer DVR capability on the Xbox One.

http://www.engadget.com/2015/08/04/xbox-one-dvr/

If this would let me retire my TiVo and its monthly service fee, I’d apply the savings to a very high capacity external drive. I currently have a 3 TB drive set aside for the purpose, but the additional load from storing video would merit some added TBs. Initially this will be for the USB OTA antennas offered for the Xbox One in Europe and the US. (Those are completely useless in my location due to the amount of geology between here and Mt. Wilson.)

Unless Microsoft is planning an external CableCard box, I’d expect that this would use a cable/satellite box connected to the HDMI In and encode the stream to a saved file. (I know the AMD-designed APU has some hardware encoding functionality, but I’ve seen far less info on it than on Intel’s QuickSync tech.) This may mean somewhat lesser quality than on a modern DVR that records the compressed stream straight from the cable. Still, hard to beat for free if you already own an Xbox One.

The Amazon Prime service is on-demand, 24/7/365; you can watch anything you want when you want. Because this is a rapidly growing method for accessing video entertainment, scheduled broadcasting is starting to feel some pain. I believe that most of it will be gone within 20 years, possibly much sooner.  Thus the DVR itself is a product category with a shrinking future. TiVo is working to reposition itself as the center of video access for the home, which puts it up against game consoles in a way it hadn’t needed to deal with previously.

One example is the Season Pass feature. It will now enlist any streaming service you subscribe to (and have told the TiVo how to log into) in order to gather up all the requested episodes sooner rather than later. It’s a nice innovation if you have the subscription, but it’s really only applicable if you’re watching a current season; a completed season on the streaming service would simply be accessed entirely from there.

So TiVo is in an existential struggle, because console makers will always have games as the primary function, but the hardware has enough spare horsepower that encoding video isn’t the major task it once was. Microsoft, and potentially Sony, can just add this as a software feature rather than rely on it as their core product.

What was announced for release next year is solely for use with an OTA antenna, but MS execs have hinted that it won’t end there.

There are alternatives available right now.

http://cetoncorp.com/products/infinitv-6-ethernet/

This page is somewhat dated, since Windows Media Center is dead as far as ongoing development goes. (It is not supported in Windows 10 at all.) Storing recordings on a shared volume presented by a DLNA server works fine for a networked DVR. A CableCard from Time Warner to decode their feed is $2.50 a month on my bill.

https://www.silicondust.com/ is another player in this area.

Gadgetry abounds but I think you’ll find you could spend vast amounts of time just exploring the library Amazon Prime gives you, with the added virtue of it being already paid for as part of your Prime subscription.

What do you think? Let us know in the comments. And share your stories with us; check out the details here.

Weather Stations and USB Servers

Your intrepid CMR editor bought a weather station last year that can update to the Weather Underground network. But to do that, he needed to connect the weather console to the PC. He wanted the console downstairs, but the desktop is upstairs. He discovered a USB server that connects USB devices over his LAN, which saved quite a few ‘sawbucks’.

A trip last year to Costco netted me an Acurite Professional Weather Station (model 0232C, which lists for $169.99 and is available on Amazon for $113); at Costco, the price was under $85. I have always been interested in getting a weather station, but most cost even more than $170. So when I saw the one at the local Costco at that price, I decided to get it. (It may not be available now, but the story is still interesting.)

Setup was fairly easy, although at additional cost. “Siting” a weather station is important to get accurate readings, but I had a good spot in the corner of my yard – no fences, lots of open space around it. In my case, there was a solid tree stump at that corner, so a trip to the local Home Depot netted me a 1” galvanized flange, a 1” to ¾” adapter, and a five foot ¾” galvanized pipe, plus four 2 ½” lag bolts, and a can of beige spray paint (to help the post blend into the neighborhood, since it is visible from the street). The weather station is mounted on the ¾” pipe, although the kit did include a mounting bracket to connect to a wooden post. All of that got the weather station installed in a great spot, and so far, the neighbors haven’t complained about the placement (there is an HOA involved, so don’t tell them).

The battery-operated outdoor unit (with a solar panel to drive an internal fan) communicates wirelessly with the indoor color display. The display has a USB port to connect to a computer, and then you can use the included software (although I found better software than what came with the kit) to send your data to a place like Weather Underground. You can also install an app on your Android or Apple device to display your data. But you do need to connect the display to your computer via the included USB cable.

And that was my problem, and what makes all of this a “doing things so you don’t have to” story.

In my house, the living room is the desired location for the color weather display. But the desktop computer is in the office upstairs. That computer doesn’t get used much, since my wife and I both have laptops that we use to connect to our home wireless network (LAN). But the desktop is connected to the laser printer (via a network printer device) so we can use that printer. The desktop is also used as the backup for our laptop files, which are backed up to the ‘cloud’ with Carbonite. (We use the Microsoft SyncToy to sync files from laptops to the desktop.) And the desktop is connected to our little LAN via an older Netgear Wireless Extender. It all works together quite nicely, in a geeky sort of way.

To get the weather data to Weather Underground (WU), you load the included app on a computer, connect the weather display to that computer via the USB cable, and the desktop app sends the data up to WU. So my layout was not going to work unless I used a 50-foot USB cable draped around the walls. Not a good home decorating idea.

Acurite had a solution, though, called a Network Bridge, at $80 retail (plus shipping). A bit steep, I thought.

A search (with a bit of help from fellow CMR advisor Eric Pobirs) eventually got me to the Monoprice company web site, which had a USB server that allows USB devices to connect to a computer via IP. The list price at Monoprice is $24, but who pays list? A search on Amazon didn’t find a cheaper price; the listing there was about $31, but there were a couple of used ones for under $10 at the time (current pricing for used ones is around $27), including ground shipping. The device specs and features looked good, and losing a ‘sawbuck’ (for you youngsters, see here) was OK if it didn’t work, so I ordered the used one.

It arrived a few days later in a padded package: the device, the power supply, and the mini-CD with the device application. It was a simple installation process: connect the weather station’s USB cord to the USB server, plug the server into the router (which is in the same downstairs location as the weather display), and connect the AC adapter.

Then it was upstairs to the desktop computer to install the device software from the mini-CD. A few clicks, and a search via the app to find the USB server on the network, and the weather display was connected. (The device used DHCP to get its IP address, although you can assign a static IP address with the app.)

I switched to the Acurite weather application, and it saw the weather display that was connected over my LAN via the little USB server. It all worked.

The USB server device can support a USB hub, so you could connect more than one USB device to your local network. You can also use it to connect a non-network type printer to your local network. If you have multiple computers, the USB server software will allow each computer to connect to any USB device, although not at the same time. The software does have a way to send a message from one computer to another to release control of the USB device.

The USB server might be most useful for sharing non-network-enabled printers at a reasonable price, although wireless-enabled printers are not that expensive, and you can find little USB wireless print servers that will allow you to connect a printer to your wireless network.

In my case, I saved over $60 using the USB server. My weather display is now connected to the desktop computer through my local network, which allows me to share my weather station on Weather Underground.

Do you have a story about computers that you’d like to share? See this page for details. – Editor

Desktop and Laptop Upgrade

CMR Reader Karen Parker decided she wanted to replace an older Windows 7 desktop with some newer hardware. Rather than purchasing a new system, she decided to ‘build her own’, and then transfer her OS and applications and data to the new system.

And then, apparently on a ‘roll’, she decided to upgrade her Windows 8 laptop with a new internal SSD drive, moving its current drive to an external box.

So in the CMR grand tradition of “I did so you don’t have to” (or “you may want to, too”), we share her experience. – Editor

For many years I’ve followed the theory that when you get a new machine you get one that is one step back from the bleeding edge (I may well have learned this from you.) This has the advantage that the machine is good for a long time, and you don’t spend a lot of money upgrading every other year, and the disadvantage that you frequently end up waiting a while before getting new technology. This last point is the hook that caught me with my desktop upgrade.

The old desktop machine was about seven years old, based on a Core i7-920 processor. My theory was that I’d get a new processor, motherboard, and RAM and just install them in my existing case, reusing the video board, disc drives, etc. I decided on a new Core i7-5820 six-core processor, and off to my local Micro Center I went. They pointed me to an ASRock motherboard, suitable memory, and, oh, by the way, you need a CPU cooler since the chip doesn’t come with one in the box. I took it all home and hit the first snag: the cooler was too tall to fit in the case. Back to Micro Center the next day for a new case and power supply.

After a day or so of studying all the instructions (it’s been several years since I assembled a computer after all) I gritted my teeth and put it all together. So far, so good.

My plan was to boot the machine into safe mode, install the new motherboard drivers, reboot, and go. So much for plans . . .

The machine would start to boot and then blue-screen. After a bit of research on the web, and a bit of thought, I realized what the problem was. The old machine only implemented one way of talking to SATA drives, the IDE protocol. AHCI wasn’t in common use when I built that machine, but in the meantime it has become the default. I powered up the machine, went into the BIOS, and sure enough, the SATA ports were set to AHCI. I reset them to IDE, booted the machine, and it came right up. I then installed the motherboard drivers, rebooted again, and the machine was alive and fully functional.

The only fly in the ointment was that it’s still using IDE protocol for the drives, hardly optimal, especially for my SSD boot drive.

Another web search revealed the incantation needed to force Windows to detect AHCI and install the proper drivers. You need to run regedit, go into the system registry, and navigate to HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\msahci. Right-click the “Start” value, select “Modify”, and set its value to 0.

Exit regedit, reboot, go into the BIOS, change the SATA ports from IDE to AHCI, and then reboot again. Windows will load the drivers and then reboot once more, and Bob’s Your Uncle. I did have to re-authenticate my copy of Windows, but that was done automatically over the internet and took only a moment.
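As a side note, the same registry change can be made from an elevated command prompt instead of regedit. This applies to Windows 7, where the AHCI service is msahci; on Windows 8 and later the equivalent service is storahci.

    # Command-line equivalent of the regedit steps above (run as Administrator).
    # Setting Start to 0 tells Windows to load the AHCI driver at boot.
    reg add "HKLM\SYSTEM\CurrentControlSet\Services\msahci" /v Start /t REG_DWORD /d 0 /f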

In the end, I managed to put a whole new machine under my existing copy of Windows, without a reinstall, always a painful proposition that takes at least a week of reinstalling and reconfiguring software.

As an aside of sorts, I have to comment on the design of the new case. This is the first case I’ve used that provides for routing the cables behind the motherboard, and it was an absolute joy to work with. A far cry from the days when you had PATA ribbon cables sprouting out of the middle of the motherboard, and running willy-nilly through the middle of the case. Tool free drive caddies are very nice too.

Upgrading the Laptop

Apparently the planets converged this summer, because my laptop came due for an upgrade too. The machine I selected is a fairly ordinary high-end laptop, with a Core i7 processor and a 17-inch display. It was equipped with a 1 TB 5400 rpm disc drive, so I decided to buy a 500 GB SSD and an external enclosure, planning to put the SSD into the machine and the disc drive in the external enclosure.

The SSD came with a license for Acronis True Image cloning software, so I downloaded it and cloned the hard drive to the SSD. I thought there might be problems cloning a larger drive onto a smaller one, but the software seemed to be OK with it, and besides, that big drive was mostly empty anyway. The clone worked just fine, so I went ahead and swapped the disc drive and the SSD; easy enough, but it required the removal and replacement of 16 or so tiny screws. Then I booted the machine, and all hell broke loose.

The machine would boot, but was very balky about doing anything. Any sort of disc operation, in particular, seemed to take half of forever, if it would complete at all. I spent a couple of days trying to fix the machine, to no avail.

So, undo those 16 tiny screws, swap the drives again, and verify that the machine had regained its sanity. I also plugged the SSD, in the external case, into my desktop machine, which promptly complained that the drive needed repair. I let it run chkdsk on the SSD, and after that it could read the SSD just fine. This was the hint I needed.

I formatted the SSD and cloned the disc onto it again, but this time I ran chkdsk on it. Chkdsk found and fixed a bunch of errors; when it was done, I verified that the SSD contained everything it should, and copied several files back and forth between the disc and the SSD. All seemed to be well, so I undid those 16 tiny screws yet again and swapped the drives.

This time it all worked, and I now have a laptop that boots lightning quick and runs great, along with 1 TB of external storage. Bottom line: cloning software works well, but it’s probably a good idea to run chkdsk on the cloned drive before you try to use it.
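In command form, that final check is trivial. The drive letter below is a placeholder; use whatever letter Windows assigns the SSD while it is sitting in the external enclosure.

    # Check and repair the freshly cloned drive before swapping it into the laptop.
    chkdsk E: /f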

Finally, a question for the [Chaos Manor] advisors. My desktop machine is currently running an Nvidia 630 graphics card, certainly not the most powerful card out there. The rest of the machine is pretty high powered: Core i7-5820 processor, 8 GB of RAM, SSD boot drive, 2 TB of other storage, and two monitors. The primary use is processing photos and writing and laying out books, so basically it needs to run MS Office and Adobe Creative Suite, but no games. Do you (collectively) have any suggestions as to what might be a better video card, or even whether I need something better?

The Chaos Manor Advisors responded with some thoughts

From Peter Glaskowsky

The one good consequence of the fact that PCs aren’t improving as rapidly as they used to is that older systems are still useful. Pretty much any desktop from the last six or eight years would be fine for reading and writing things.

From Eric Pobirs

    If gaming is not an issue, the main thing to look for in a video card upgrade is the improvement for Adobe Creative Suite and other software that in recent versions can enlist the GPU for additional processing power and not just display tasks. A lot of the stuff that once drove the purchase of dedicated accelerator boards is now well within the functionality of a modern GPU.

From Alex Pournelle

And the built-in graphics are getting better all the time. Intel’s next generation of graphics claims a 30-45% improvement.

From Peter Glaskowsky

Since the Core i7-920 processor was introduced less than seven years ago, she might have made an exception to the “wait a year” rule, or maybe the machine isn’t as old as she thinks. Might as well clear that up before publishing, if possible, so as to avoid confused readers and unnecessary feedback.

As for her graphics card question, her applications don’t require anything better. A significant upgrade would cost maybe $80-$100, but it’s possible she wouldn’t even notice the difference.

It might be worth checking with Adobe to see if her version of Creative Suite supports GPU acceleration, and if so, what graphics cards can be used. It may be that a workstation-class card will deliver a significant performance enhancement in those applications.

Many workstation cards are available for under $100, but for the same price as a consumer card, a workstation card will usually come with an older, slower GPU. So it would take some work to figure out how much it would cost to deliver a meaningful speed boost.

[If you readers have thoughts, use the comments below. And if you would like to share your experience, we’re looking for more stories. – Editor]