Friday hike blogging: Sterling Pass trail

by Robin Harris on Friday, 24 October, 2014

After the Slide Rock fire, all the trails in Oak Creek Canyon were closed for about 4 months. They reopened October 1, so I’ve been hiking some of them to see what the fuss was about.

The Sterling Pass to Vultee Arch hike is about 1200 steep feet up, 800 steep feet down, and then the reverse. Maximum altitude is just over 6,000 ft. It’s supposed to be about 5 miles, but it felt longer.

Here’s looking east from the pass.

Sterling Pass Vultee Arch


Vultee Arch is named in honor of Jerry Vultee, aviation pioneer, and is near where he and his wife Sylvia died in a plane crash in 1938. Mountain flying can be treacherous.


Shadow IT industry pt. III: what’s next?

by Robin Harris on Friday, 24 October, 2014

What’s next for the shadow IT industry? It should be obvious: after blowing up the storage and server business models, what’s left?

Amazon has been working on their own networking software and hardware for several years. While networking isn’t a large part of their cost structure – servers are – that share was growing rapidly because network gear wasn’t following Moore’s Law.

AWS can’t predict customer workloads – a low-data web server might be replaced by a data warehouse – so they have to provision for the maximum workload. As low-data apps might need 1% of the bandwidth of high-data apps, that’s an expensive proposition.
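The economics can be sketched with a quick back-of-the-envelope calculation. All the numbers below are hypothetical illustrations, not AWS figures:

```python
# Cost of provisioning every host for the peak workload vs. actual demand.
# All figures are hypothetical illustrations, not AWS numbers.

HOSTS = 10_000
HIGH_GBPS = 10.0             # bandwidth a data-heavy app (say, a data warehouse) needs
LOW_GBPS = HIGH_GBPS / 100   # a low-data web server: ~1% of the high-data app
HIGH_FRACTION = 0.05         # share of hosts actually running data-heavy apps

# Placement is unpredictable, so every host gets peak provisioning.
provisioned = HOSTS * HIGH_GBPS

# What the actual mix of workloads uses.
used = HOSTS * (HIGH_FRACTION * HIGH_GBPS + (1 - HIGH_FRACTION) * LOW_GBPS)

print(f"Provisioned: {provisioned:,.0f} Gbps")
print(f"Used:        {used:,.0f} Gbps")
print(f"Utilization: {used / provisioned:.1%}")  # single-digit utilization
```

With these assumptions, over 90% of the provisioned bandwidth sits stranded – exactly the kind of waste that forces a redesign.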
They were forced to address the problem.

No doubt Google, Facebook and Azure are seeing the same issue. Then the question is: what is the likely impact on the network market?

The StorageMojo take
The data center network vendors are caught in a classic Innovator’s Dilemma trap. Cutting their prices to meet web-scale competition wouldn’t win them much business but would cost them lots of revenue and margin.

Software-defined networking is obviously driven by web-scale competition and technology. The incumbents want SDN to succeed – to avoid a worse outcome – but only on their terms.

That’s the same problem storage incumbents have with scale-out clusters: embrace, but not too tightly; promote, but not too aggressively.

But the cost differential between on-premise proprietary kit and cloud is too big to ignore – and it’s growing. Worse, the differential will continue to grow, while the capabilities improve.

Historically, few companies have survived a shift from 60-70% gross margins to 30%. Too many practices that lower margins can’t support are baked into company operations.
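The arithmetic is brutal. A quick sketch with hypothetical numbers: to keep gross profit dollars flat, a vendor whose gross margin drops from 70% to 30% must more than double its revenue:

```python
# Revenue needed to hold gross profit constant after a margin collapse.
# Hypothetical numbers for illustration.

revenue = 1_000.0     # $1,000M of revenue today
high_margin = 0.70    # historical gross margin
low_margin = 0.30     # web-scale-competitive gross margin

gross_profit = revenue * high_margin   # $700M of gross profit

# Revenue required at the lower margin to throw off the same gross profit:
revenue_needed = gross_profit / low_margin

print(f"Revenue needed at 30% margin: ${revenue_needed:,.0f}M")
print(f"Required growth: {revenue_needed / revenue - 1:.0%}")
```

Few companies can grow revenue 2.3x while their unit prices are collapsing – hence the dismal survival record.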

One thing is certain: datacenter network bandwidth is a commodity and needs to be priced like one. The opportunity is in the upper layers of the stack.

Courteous comments welcome, of course.


Friday hike blogging: West fork, new and improved!

by Robin Harris on Friday, 17 October, 2014

Finding that the recent fire and monsoon rains had altered the west fork of Oak Creek made me want to go back further up the creek.

Without a paddle.

So last Sunday I did.


That’s Steve in the corner. Manou is in the crowd up ahead.

Spectacular! In case you couldn’t tell. . . .


Shadow IT pt. 2

by Robin Harris on Friday, 17 October, 2014

The first post on shadow IT looked at R&D spend. Now we look at CapEx spend – specifically PP&E, or property, plant and equipment. That’s where new datacenters, servers, storage and networks go.

Big Spend
The FY13 PP&E spend in billions from major players:


While R&D expense is a proxy for innovation velocity, PP&E is a subtler metric. To the extent that the major cloud providers buy large quantities of kit, they influence the commodity market more than traditional system vendors.

Why? Because vendors are no longer on the cutting edge of demand or technology. Suppliers focus on where the growth is – and it isn’t in the legacy vendors. The opportunity to sell 500,000 units concentrates the mind wonderfully.

Big (ops) data
Another difference is that cloud vendors have deep insight into their own operations that isn’t available to legacy vendors or analysts. They don’t need to explain the “why” to suppliers, just the “what”. That means even their suppliers have little insight into the opportunities their customers are exploiting.

We see this influence in several developments over the last 8 years.

  • Rapid adoption of high efficiency power supplies
  • Continued investment in high-capacity optical disks despite little consumer uptake
  • Advent of shingled magnetic recording drives (SMR) without public announcement or availability
  • DC power distribution systems

There are probably dozens of less obvious changes due to the cloud vendors’ buying clout, such as many-core chips, many-drive (≈40 or more) servers, and containerized data centers. Since these changes aren’t responding to enterprise needs, most enterprise people don’t try to understand them.

The StorageMojo take
As Arthur Conan Doyle put it in his Sherlock Holmes story Silver Blaze:
Gregory (Scotland Yard detective): “Is there any other point to which you would wish to draw my attention?”
Holmes: “To the curious incident of the dog in the night-time.”
Gregory: “The dog did nothing in the night-time.”
Holmes: “That was the curious incident.”

It’s hard to see the action of unseen forces. Whether it’s dark matter or dark money, we are forced to look for the strange effect and then reason backwards to the cause. It’s not easy.

The most obvious long-term effect of the cloud vendors’ OpEx and CapEx primacy is that legacy vendors will fall further behind in both invention and implementation. By extension, so will enterprises that outsource their infrastructure architecture to legacy providers pushing low-scale systems.

The shadow IT industry is changing IT faster than ever before. Now is the time to start following and planning for it.

Courteous comments welcome, of course.


The shadow IT industry

by Robin Harris on Tuesday, 14 October, 2014

The power of the big IaaS players – Amazon, Google, Facebook, Azure (AGFA) – constitutes a shadow IT industry. It is a shadow because its operations are outside the transparency we take for granted with legacy IT vendors like IBM, HP, Cisco and Oracle.

AGFA announces new services, but the tech behind the services is usually secret. We can see some of the effects – the inexplicable investments in high-capacity optical or the emergence of SMR drives – but the configurations and economics remain opaque.

To get a sense of AGFA’s importance in IT, StorageMojo compared the R&D spend of AGFA and the legacy vendors. The numbers are roughly from 2013, but have a little slop because of different fiscal year start dates. All numbers in billions.

Shadow IT table
AGFA’s R&D spend is almost 28% higher than the legacy firms’.

Update: A commenter on Twitter questioned whether Microsoft’s $10 billion should be on the list. As of 2011, a Microsoft exec said that 90% of their R&D spending was for cloud. Given their continuing cloud push, Nadella’s elevation and the competitive fireworks among Google, Amazon and Microsoft, I’d expect today’s number to be in that ballpark. After all, how much can they spend updating Windows or Office? End update.

The StorageMojo take
Trickle-down economics doesn’t work, but trickle-down technology does. The AGFAs build something great for massive scale, and a few years later the engineers strike out on their own. Nutanix is one example.

In the bad old days – 5 years ago – the legacy vendors controlled enterprise architecture because they were the only game in town. No more.

Advanced architectures and technologies are being developed for in-house use and debugged in massive-scale 24/7 environments that are far more demanding than any enterprise. The legacy vendors can’t afford to compete with that, even if they weren’t being outspent.

The downside, of course, is that AGFA doesn’t care about enterprise problems. Their technology has to be adapted to the enterprise – or the enterprise to their technology.

Get used to it. This won’t change any time soon.

Courteous comments welcome, of course.


Scale and intelligence: lessons from warehouse-scale computing

by Robin Harris on Monday, 13 October, 2014

Why is enterprise infrastructure so costly and inflexible while warehouse-scale computing is cost-effective and flexible? Is it:
a) Enterprise infrastructure is too capital intensive?
b) Warehouse-scale people are smarter?
c) High-scale systems can’t be reduced to enterprise-scale cost effectively?

Lucky for you, this is an open-blog exam. Read on.

People, machines and scale
Greg Ferro wrote a provocative piece on his blog Ethereal Mind titled Human Infrastructure Poverty & Over-Capitalisation In The Enterprise. He argues that enterprises have been conned into spending too much on product and too little on people.

The purchase promise offered by existing IT vendors is that increasing Capex will reduce the cost of ownership through faster performance, better features, better software and usability. Yet the public cloud uses conventionally ‘over staffed’ IT teams creating automation and orchestration that reduces capital spend up to 70%. Modest investments in human infrastructure instead of physical or software has built an entire product category that is growing in excess of 40% per annum.

A fair critique. But why are enterprises unable to follow the IaaS industry’s lead?

Money, money and money
Data center budgets are 60-70% salaries, while Google-scale computing cost is 3-5% salaries. Yet Google, Amazon or Microsoft can afford a lot more PhDs than any enterprise because OpEx isn’t the crucial metric.

Why? Scale.

OpEx is the wrong measure for the big players. The real investment is in R&D budgets, not OpEx.

One data point: Google spent $7.95 billion on R&D in 2013; Goldman Sachs spent less than 1/10th that – $776 million – on communications and technology, most of which was OpEx, not R&D. How many IT shops even have an R&D budget?

Goldman’s IT folks are rumored to be among the best paid and brightest in the industry. But they don’t build their own infrastructure either.

The StorageMojo take
A number of companies sell Google-style, scale-out infrastructure, either as software only or as integrated hardware and software.

But adoption is a problem. In the short run these are new systems that incur startup costs, while the savings come – mostly – in the long run.

CIOs have been playing the OpEx savings card for so long that CFOs don’t believe them. Given IT’s dismal record for new projects, who can blame them?

Mr. Ferro contends that adding intelligence to IT rather than trusting vendor R&D is a Good Thing. He’s no doubt correct.

But how will its ROI be validated? Where will this intelligence come from? Most vital: how will CFOs and CEOs be convinced?

Because of their scale Amazon, Google, Azure and Facebook can afford to put PhDs on problems that no enterprise would see a return for. The right answer for enterprises is to implement resilient scale-out architectures from committed vendors, rather than attempt to reinvent a warehouse-sized wheel. Focus scarce resources on evaluating best-of-breed solutions and the cultural changes required to best implement them.

Courteous comments welcome, of course. Where do you think the biggest payback from greater IT intelligence would come from?


Symantec to split – finally!

October 8, 2014

Multiple outlets are reporting that Symantec (SYMC), which bought storage software leader Veritas for $10.6 billion in 2005, is soon to break itself up into a security company and a storage software company. SYMC segments their business into User Productivity & Protection, Information Security, and Information Management. The Information Management segment includes backup and recovery, […]


Friday hike blogging: the west fork of Oak Creek

October 5, 2014

The most popular local trail is the west fork of Oak Creek. It’s relatively flat and, thanks to the high canyon walls and riparian foliage, usually shady – a big plus in the summer months. The trail has been closed for the last 4 months due to the 21,000 acre Slide Rock fire. The Forest […]


Reconstruction almost complete

October 4, 2014

Like reconstructing Leeloo from the Fifth Element, with no Milla Jovovich. Or a replacement drive in a RAID set. This has been a busy week, but not with research and blogging. Busy fixing the blog instead. WordPress 4.0 broke the StorageMojo theme. The themesters came out with an update with 4.0 support, but migrating to […]


Friday hike blogging: Brins Mesa

September 27, 2014

I frequently hike the Brins Mesa-Soldiers Pass-Jordan Trail loop, but Wednesday Qing wanted to shake things up. So once we made the top of the mesa we turned north and hiked another mile and about 500 feet higher. That took us to the edge of the Mogollon Rim. On top of the Rim you’re on […]


Macromolecular storage: the next frontier

September 26, 2014

Disk drives and flash are already pushing the limits of nanotechnology to increase density. But what if we went with encoding data directly into molecules? Does a petabyte per cc sound interesting? In Advances in Macromolecular Data Storage, Masud Mansuripur, a professor in the College of Optical Sciences at the University of Arizona, proposes a […]


EMC’s “Federation” meme is so dead

September 24, 2014

With reports from the Wall Street Journal and the New York Times that EMC has been shopping itself to HP, Dell and perhaps Cisco and Oracle (pretty please!) it’s clear that the “EMC Federation” concept has cratered. Why did it take so long? While an activist investor – hedge fund Elliot Management – has pushed […]


Friday hike blogging: Chicken Point

September 19, 2014

Got up at 6am last Sunday and headed off to the Broken Arrow trailhead. There are several possible loops but so far I’ve stuck with the longest one around Twin Buttes. It’s about 6.5 miles with over 1300 ft of total vertical, so it’s a decent workout. Lots of great views – I published one […]


StorPool’s new distributed storage software

September 16, 2014

It was obvious in 2006 that Google’s clean-sheet GFS would revolutionize massive storage. The problem has been taking Google’s concepts and scaling them down to less than warehouse scale. A number of companies have tried – Nutanix is probably the latest – and there’s a new entrant. StorPool offers distributed block storage designed to be […]


Friday hike blogging: Brins Mesa

September 12, 2014

With family visiting I only got out once this week: a 3 hour hike on the Brins Mesa, Soldiers Pass, Cibola and Jordan Trail’s loop. It’s a favorite: bracing vertical; much variety; not too many tourists (usually); and, of course, fabulous vistas. We’re just coming to the end of Arizona’s monsoon season, which has been […]
