Meeting young Mr. Trump

by Robin Harris on Thursday, 30 June, 2016

Back in 1980 I met Donald Trump. He came to a finance class to talk about real estate finance.

I have no recollection of his talk. But I DO remember the visit and, given what I’ve read about Mr. Trump, some readers may find my recollection an interesting footnote.

To set the scene, this was a graduate MBA course in finance, at a top business school – Wharton – with maybe a couple of dozen almost all male students in the class.

Trump was then about 12 years out of Wharton undergrad, and had notched a major success with the Grand Hyatt in mid-town Manhattan in 1976. Trump partnered with the Pritzker family on that project, and after a falling out, sold his half for $140 million in 1996.

What WAS memorable was that he brought a tall, slim blonde. He introduced her as his wife Ivana and a former Czech Olympic skier (evidently not true). Mrs. Trump spent the entire time with a deer-in-the-headlights look, as if someone might ask her about finance.

No one did.

The StorageMojo take
Maybe the professor thought Mr. Trump would say something useful. Or he wanted a day off.

In retrospect though, the only reason to bring Ivana was as a prop. Given that Mr. Trump had actually pulled off a major success in the tough Manhattan real estate market, that was unnecessary.

Draw your own conclusions about what this says about the young Mr. Trump. Based on what I’ve seen of the old Mr. Trump, he would be a disaster for America and the world as President of the United States of America. There’s a reason we rarely elect business people as Presidents: politics requires totally different skills.

Courteous comments welcome, of course. Yes, this is off-topic for StorageMojo. Back to our regularly unscheduled programming soon.


Enterprise storage goes inside

by Robin Harris on Monday, 20 June, 2016

Some interesting numbers out of IDC by way of Chris Mellor of the Reg. First up: the entire enterprise storage market in the latest quarter:

Note that HPE is #1.

Then the numbers for the external enterprise storage market:

HPE is now #3 with $535.7 million.

The difference is internal storage
That means that HPE sold $884.6 million in internal enterprise storage. Dell is a similar story: $845.5M total; $376.2M external; $469.3M in internal storage. Hitachi and IBM both sell servers and enterprise storage, but their numbers are mostly external.

Among all vendors: $8211.3M total; $5423.6M external; $2787.7M internal.
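The subtraction is easy to check. A quick sketch – HPE’s total is implied by its internal and external figures above, since IDC’s chart isn’t reproduced here:

```python
# Internal enterprise storage revenue = total - external, per the IDC
# figures cited above (revenue in $M; HPE's total is derived, not quoted).
idc_q = {
    "HPE":  {"total": 1420.3, "external": 535.7},
    "Dell": {"total": 845.5,  "external": 376.2},
    "All":  {"total": 8211.3, "external": 5423.6},
}

for vendor, rev in idc_q.items():
    internal = round(rev["total"] - rev["external"], 1)
    print(f"{vendor}: internal = ${internal}M")
```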

HPE was the only vendor to show sales growth in both the total and external markets. And most of their growth was in the internal market.

The StorageMojo take
Is the internalization of enterprise storage a trend? Yep.

Look at the growth of hyper-converged storage. The growth in servers that are, essentially, big boxes of drives. The growth in scale-out object storage that uses those big boxes of drives.

That puts a different complexion on the Dell/EMC combo. By itself EMC can’t compete for internal enterprise storage. By itself Dell can’t compete for the external enterprise storage market.

Together they can beat HPE. Which, five years ago, would have been laughably unambitious. Today, a worthy goal.

I was a guest at HPE’s Discover 2016 in Las Vegas a couple of weeks ago. One of their storage execs mentioned that internal storage was becoming a major part of their enterprise story. That struck me as odd until I thought about it – and saw the IDC numbers.

This is a natural outgrowth of treating storage components as devices that will fail, and asking the software to handle failures when they occur. That paradigm shift enables the use of commodity hardware – and internal storage. And more headwinds for the external storage vendors.
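To see why “assume failure, fix it in software” makes commodity drives acceptable, here’s a toy model: with n independent copies of the data, all n must fail for data to be lost. The 4% annual failure rate is an assumed round number, and the model ignores rebuild windows and correlated failures:

```python
# Toy durability model for software-managed replication on commodity
# drives: annual data-loss probability if every one of `replicas`
# independent drives (each with annual failure rate `afr`) fails.
def annual_loss_probability(afr: float, replicas: int) -> float:
    return afr ** replicas  # independence assumed - optimistic in practice

for n in (1, 2, 3):
    print(f"{n} replica(s): p(loss) ~ {annual_loss_probability(0.04, n):.6f}")
```

Even this crude estimate shows why cheap, failure-prone hardware plus smart software can beat expensive, “reliable” hardware.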

Courteous comments welcome, of course. I have a soft spot for HPE storage: DEC’s storage group and the StorageWorks brand were eventually absorbed into HP, and I was, among other things, the product manager for DEC’s first StorageWorks product.


Hike blogging: Hog Heaven trail

by Robin Harris on Sunday, 12 June, 2016

Mountain biking is very popular in the surrounding national forest. Enthusiasts have built many challenging bike trails, which I like to hike – not bike. I’ve never broken a bone and don’t intend to start now.

Hog Heaven is one of the toughest of the local trails, with a double black diamond rating. Yesterday morning I started on the Broken Arrow trail, turned off on to Twin Buttes, and then on to Hog Heaven. It was a nice, not-too-long loop of about 4 miles, with only about 300 feet of vertical. Here’s an iPhone shot from near the eastern end of the trail, looking north:

Click to enlarge.

Once over the saddle – to the left of the picture – Hog Heaven gets challenging even on foot, with some steep and technical drops. Once you begin some sections, I can’t imagine how you could stop without serious injury. I’d like to see an expert mountain biker take it on.

The StorageMojo take
Trails – whether human or animal – are a form of storage, collecting the output of a calculus of effort, reach, and desire. I admire the mountain bike community for creating some of the best trails in the region – and for surviving them.

If you come to town and want to try some of these trails, I recommend the Fat Tire bike shop. They rent Ibis bikes that Dave, the owner, sets up for each rider. Dave is very smart, passionate about bikes, and brings deep knowledge of bike design and physics to his work. Yeah, he takes care of my bike, but this recommendation is unsolicited.

Courteous comments welcome, of course.


EMC perfumes the pig

by Robin Harris on Friday, 10 June, 2016

I feel sorry for EMC’s marketers: they have to make 10-20 year old technology seem au courant. It’s an uphill battle, but that’s why they get the big bucks.

The latest effort to perfume the pig – hold still, dammit! – is EMC Unity. In a piece that – and this is a sincere compliment – rivals the best writing of “wealth creation systems” websites and late night infomercials, EMC’s Chad Sakac goes on and on about Unity, the VNX successor.

VNX is huge for EMC
Tucci is trying to keep the wheels on until the Dell acquisition closes. Sakac makes it clear that Unity is critical to the effort:

EMC VNX has an installed base that is well, well north of 100,000 deployed arrays. It has been the most successful EMC platform to date in terms of the number of people and customers, and VNX brings more net new customers to EMC than anything else we do – even as it became increasingly aged.

Bolding added.

Mr. Sakac’s piece makes other interesting points.

  • There’s a lot of new code in Unity, and as he rightly points out, it takes a long time to harden code, even with a staged introduction of several pieces. In my experience the hardest bugs are the interactions between modules, not within modules.
  • There’s no PCIe/NVMe interconnect support. That’s the future, not FC or SAS.
  • This is old-school scale-up storage: forklifts forever!

No mention of RAID HDD rebuild times, but assume days for the latest large capacity drives. Naive buyers won’t know what that means – which may explain who is buying these arrays.
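For buyers who do want to know what that means, here’s a hedged back-of-the-envelope. The 50 MB/s sustained rebuild rate is my assumption for a busy production array, not a vendor spec:

```python
# Rough RAID rebuild-time estimate for a large-capacity HDD.
# Assumes rebuild is bottlenecked by sustained write rate on a busy
# array (~50 MB/s assumed here; idle arrays rebuild faster).
def rebuild_days(capacity_tb: float, rate_mb_s: float = 50.0) -> float:
    seconds = capacity_tb * 1e12 / (rate_mb_s * 1e6)
    return seconds / 86400  # seconds per day

print(f"8 TB drive: ~{rebuild_days(8):.1f} days")
```

Days of degraded-mode operation per failure – that’s the number naive buyers never see on the datasheet.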

The StorageMojo take
The bottom line here is that even the re-engineered VNX/Unity is aimed at customers who are still heavily invested in legacy technology – or who don’t know what that means. EMC’s problem – and every legacy array vendor’s – is that cheap flash IOPS have destroyed the value of years of array controller optimizations for hard drives.

I divide storage arrays into two groups: legacy, developed before 2000; and modern, developed since 2000. As Mr. Sakac makes clear, EMC remains heavily invested in – and dependent upon – legacy architectures.

External storage isn’t going away, but major vendors can no longer ignore the fact that the most important storage is often internal, where bandwidth is cheap and latency lower, such as in-memory databases. That’s why Tucci is selling EMC to Dell – and why Dell is paying way too much for rapidly depreciating assets.

Courteous comments welcome, of course.


Commoditizing public clouds

by Robin Harris on Wednesday, 8 June, 2016

I’m a guest of Hewlett-Packard Enterprise at Discover 2016 in Las Vegas, Nevada this week. I enjoy catching up with the only remaining full-line computer company. HP was a competitor in my DEC days, and since the Compaq purchase it has incorporated the remains of DEC as well.

One of their themes this year is multi-cloud infrastructure. The multi-cloud is the Swiss Army knife of cloud implementation: private; public; managed; and, for good luck, incorporating those bits you can’t or don’t want to migrate to any cloud.

HPE says they have a wide array of software and services to enable multi-cloud implementations, whose enumeration will be left as an exercise for the reader. I’m more interested in this as a competitive response to the public cloud.

HPE’s cloud integration software supports the major cloud providers as well as a customer’s private cloud. Cloud brokerage is on the horizon, allowing customers to automagically get the lowest cost cloud service for a given workload.
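Here’s a sketch of what brokerage boils down to: pick the cheapest quote for a given workload. The provider names are real, but the prices and the cheapest_provider helper are invented for illustration – real brokerage would weigh performance and data gravity too:

```python
# Minimal cloud-brokerage sketch: choose the lowest-cost provider
# for a workload from a set of (hypothetical) hourly price quotes.
def cheapest_provider(quotes: dict[str, float]) -> str:
    return min(quotes, key=quotes.get)

quotes = {"AWS": 0.096, "Azure": 0.091, "Google": 0.089}  # $/hr, invented
print(cheapest_provider(quotes))
```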

The StorageMojo take
Today, of course, Google, Microsoft and AWS have very different strengths and capabilities. But to the extent that customers have common cloud needs – and to the extent that cloud providers care to respond to competition – they will tend to converge over time.

That convergence is another name for commoditization. And if HPE and others encourage their customers to play cloud providers off against each other based on price, that will shift the market’s center of gravity.

This isn’t a 24 month shift; it’s more like 10 years. Yet as we look at the future arc of public cloud adoption, we start to see how current vendors can respond effectively.

Short answer: public clouds are not the inevitable winners over enterprise data centers. The battle will continue to evolve, for the benefit of all of us, if not all vendors.

Courteous comments welcome, of course.


Thunderbolt: a fast and cheap SAN

by Robin Harris on Thursday, 2 June, 2016

If memory serves – and mine often doesn’t – I asked a panel at the NVM Workshop at UCSD their opinion on using Thunderbolt as a cheap, fast, and flexible interconnect. After all, I thought, academics always need more than they can afford, so these guys would have been looking into it.

Nope! They laughed at the very thought, then confirmed their ignorance by assuming that Thunderbolt was limited to Macs. Even very smart people make mistakes!

So I was pleased to see Thunderbolt used as a cluster interconnect by a couple of companies at NAB 2016. The furthest along – though not yet shipping – was Symply.

Started by Alex Grossman – a former Apple storage guy – with assists from Promise and Quantum, Symply is aiming at the media and entertainment space with Thunderbolt-based storage for workgroups and individuals.

Why Thunderbolt?
Let me count the reasons.

  • Fast. Thunderbolt 3 is 40Gb/s, making it one of the fastest shipping interconnects on the market. Low latency too.
  • Free. It’s built into all Macs and a rapidly growing number of motherboards from the likes of Lenovo and HPE.
  • Robust. I’ve been using Thunderbolt 1 for four years and it is very solid.
  • Support. Intel is pushing it hard and Microsoft supports it in Windows Server. Linux supports it as well.
  • Faster. When Intel announced Thunderbolt their roadmap went up to 100Gb/s. That’s still their plan.
  • Flexible. I think of Thunderbolt as a layer 1 and 2 pipe that enables all kinds of other protocols such as IP, PCIe, USB 3, DisplayPort and more.

But will it switch?
While I came to love Thunderbolt after I started using it, it’s limited to eight nodes. That’s not much of a cluster.

So I was curious about how Symply broke that limit to build a cluster large enough to handle eight editing stations plus storage. Very simple!

They use PCIe as the protocol and PCIe switches. Thunderbolt 3 provides the fast physical pipe, essentially replacing the PCIe physical layer. Since PCIe sends data in packets, architecturally it can run over multiple physical interconnects.
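The raw numbers suggest why the substitution works. Rough arithmetic, ignoring protocol overhead:

```python
# Why Thunderbolt 3 holds up as a PCIe transport: raw-rate comparison.
# Thunderbolt 3 signals at 40 Gb/s; PCIe 3.0 carries ~8 Gb/s per lane
# before 128b/130b encoding. (Rough numbers; overhead ignored.)
tb3_gbytes = 40 / 8                        # 5.0 GB/s raw
pcie3_x4_gbytes = 4 * 8 * (128 / 130) / 8  # ~3.94 GB/s for four lanes

print(f"Thunderbolt 3: {tb3_gbytes:.2f} GB/s")
print(f"PCIe 3.0 x4:   {pcie3_x4_gbytes:.2f} GB/s")
```

A Thunderbolt 3 link has headroom to spare over the PCIe 3.0 x4 connection it typically tunnels.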

Quantum’s StorNext cluster file system, which is built into Mac OS and available for many other systems, provides the file access and locking mechanisms that clusters require. Potentially, then, Symply – or poor academics – could build much larger clusters using Thunderbolt.

The StorageMojo take
When DSSD’s plan to use PCIe first appeared, I was dubious since there weren’t any large scale PCIe switches available. But that’s changed in the last two years, with IDT, Avago and Broadcom all offering cheap 24 port switch chips.

I recall hearing that originally PCIe was intended as a server interconnect, not a local bus, but Intel took it to where the demand was as PCI ran out of steam. It seems that using Thunderbolt, PCIe is about to deliver on its early promise.

Courteous comments welcome, of course.


Hike blogging: Memorial Day 2016

May 31, 2016

Yesterday I took a seven mile out-and-back hike along the Chuckwagon Trail. This is an area I want to explore more. It didn’t look like a good day for pictures, so I didn’t take the Canon EOS-M. But the smoke from a couple of burns on the Rim cleared and some nice clouds appeared, so […]


The array IP implosion

May 23, 2016

We’ve seen this movie before The value of legacy array intellectual property is collapsing. This isn’t complicated: SSDs have made IOPS – what hard drive arrays were optimizing for the last 25 years – easy and cheap. Think of all the hard-won – well, engineered – optimizations that enabled HDD-based arrays to dominate the storage […]


WD is not a disk drive company – and not a moment too soon

May 20, 2016

While you weren’t looking Western Digital stopped being a hard drive company, morphing into a storage company. Such transitions are nothing new for a company that started life making calculator chips in the 1970s, morphed into SCSI, ATA and graphics in the 80s, and built its disk drive business in the 90s and 00s. The […]


Scale and the all-flash datacenter

May 9, 2016

There’s a gathering vendor storm pushing the all-flash datacenter as a solution to datacenter ills, such as high personnel costs and performance bottlenecks. There’s some truth to this, but its application is counter-intuitive. Most of the time, storage innovations benefit the largest and – for vendors – most lucrative datacenters. OK, that’s not counter-intuitive. But […]


Why storage is getting simpler

May 2, 2016

Goodbye, old bottleneck StorageMojo has often asked buyers to focus on latency rather than IOPS thanks to SSDs making IOPS cheap and plentiful. This naturally leads to a focus on I/O stack latency, which multiple vendors are attacking. But what are the implications of cheap IOPS for enterprise data center operations? That’s what’s motivating the […]


Storage surprises at NAB 2016

April 22, 2016

I did NAB a little differently this year: attended on Wednesday and Thursday, the last two days of the floor exhibits. Definitely easier, although many of the execs left Wednesday. But that wasn’t a surprise. Here’s what did surprise me: EMC seemed to have less of presence than in past years. I expected more. HGST […]


NABster 2016

April 18, 2016

Tomorrow the top StorageMojo superforecasting analysts are saddling up for the long ride to the glittering runways of Las Vegas. The target: NAB 2016. As much as I like CES, NAB is my favorite mighty tradeshow. It is toy show for people with very large budgets – and we all know who gets the best […]



April 18, 2016

I see a forecast in your future A few months ago I wrote about the best single metric for measuring marketing. That metric: It’s the forecast, when compared to actuals. If the forecast is accurate to ±3%, you’ve got great marketing. If ±10% you’ve got good marketing. So I was happy to see a book […]


Smart storage for big data

April 15, 2016

IBM researchers are proposing – and demoing – an intelligent storage system that works something like your brain. It’s based on the idea that it’s easier to remember important things, like a sunset over the Grand Canyon, than the last time you waited for a traffic light. We’re facing a data onslaught like we’ve never seen […]
