Dell vets: buff up your resumes this weekend

by Robin Harris on Friday, 9 September, 2016

Now that Dell has completed the EMC acquisition, you are in for a rude awakening. While Dell may own EMC, EMC owns you.

Richard Egan, one of the founders of EMC, fostered an exceptionally aggressive sales culture. The company liked to hire guys from blue collar families who’d played football in college, then dropped them into a competitive culture where, if successful, they could make $500,000 a year.

All company travel was done on personal time, not business hours. Reps weren’t allowed to get too comfortable with their territories and accounts: after a year or two of success, your budget would be upped and/or you’d get some new accounts. And miss your number for a couple of quarters? You’d be MIA.

The pressure was intense. When I was at Sun, a customer told us that he’d had to call security to remove an EMC rep who was screaming at him for buying Sun instead of the EMC kit she’d been counting on closing.

A DELLicate transition
As EMC has acquired other companies with different cultures, and as cost pressures have grown, EMC CEO Joe Tucci has tamped down the macho, go-for-the-throat culture of Egan’s EMC. But make no mistake: EMC is still an aggressive sales machine.

Forget the details of who bought whom. It’s not uncommon for the acquired company’s personnel to elbow out the acquiring company’s incumbents. After all, if the acquiring company had the skills it wanted in-house, why go outside?

Dell’s storage initiatives, while well-intentioned, suffered from two problems, one normal and one not. The normal problem: when a large company acquires a small one, the small company gets engulfed in meeting a thousand new requirements and process hoops, while at the same time trying to get the large company’s sales force to start aggressively flogging its gear.

The abnormal problem: Mr. Dell never knew what he didn’t know about storage, so he never put the emphasis on changing Dell’s culture to make it happen. Since storage sales are harder than server sales, the Dell sales force has never been interested, while Dell’s storage marketing team didn’t have Mr. Dell’s support to overcome sales inertia. For server sales teams, the forecast calls for pain.

But the carnage won’t stop there
EMC has a deep bench, both in sales and operations, as well as way more technology than Dell has ever seen. When there’s an internal hire to be made, the EMC candidate is likely to have more relevant experience.

That’s not all. Since Dell has wildly overpaid for EMC, and the global economy is weak, the way to higher profits is through cost cuts. Headcount will get a serious cut over the next 24 months.

The StorageMojo take
I’d be a little more optimistic if Mr. Dell had turned the reins over to Joe Tucci, who’s one of the smartest CEOs in tech. Not that that would help Dell vets, but it would get the transition running more smoothly, sooner.

Instead we’ll likely get Mr. Dell’s usual flailing about and multiple misfires, especially as the magnitude of this acquisition sinks in.

If I were a Dell vet, I’d rather jump than be pushed.

Courteous comments welcome, of course.


Artisanal science doesn’t scale

by Robin Harris on Thursday, 8 September, 2016

Big data will overwhelm artisanal science. That’s what I conclude from a recent paper that lays out the stark statistics:

Science is a growing system, exhibiting 4% annual growth in publications and 1.8% annual growth in the number of references per publication. Together these growth factors correspond to a 12-year doubling period in the total supply of references, thereby challenging traditional methods of evaluating scientific production, from researchers to institutions.

Given 4% annual growth, the number of publications will double in ≈17 years. But 1.8% growth in references per publication means that figure will only double in ≈39 years. At some point, references will fall so far behind research production that papers will become hit-or-miss affairs, unable to accurately portray the current state of knowledge.
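Here’s the arithmetic as a quick sketch – nothing here beyond the paper’s stated growth rates and the standard compound-growth doubling formula:

```python
# Doubling time for a compound annual growth rate r: t = ln(2) / ln(1 + r)
import math

def doubling_years(rate):
    return math.log(2) / math.log(1 + rate)

print(f"publications (4%):             {doubling_years(0.04):.1f} years")   # ~17.7
print(f"references/publication (1.8%): {doubling_years(0.018):.1f} years")  # ~38.9
# Total references grow at (1.04 * 1.018) - 1, about 5.9% - the paper's 12-year doubling:
print(f"total references (~5.9%):      {doubling_years(1.04 * 1.018 - 1):.1f} years")  # ~12.2
```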

The paper, The Memory of Science: Inflation, Myopia, and the Knowledge Network, by Raj K. Pan, Alexander M. Petersen, Fabio Pammolli, and Santo Fortunato – researchers at Aalto University in Finland, the School for Advanced Studies in Lucca, Italy, the University of California, Merced, and Indiana University – takes a long and large view. The team

. . . analyzed a citation network comprised of 837 million references produced by 32.6 million publications over the period 1965-2012, . . .

in order to analyze the trends in how research is cited and used. They noted trends that will accelerate with the growth of automated data collection and analysis.

Over this half-century period we observe a narrowing range of attention – both classic and recent literature are being cited increasingly less, pointing to the important role of socio-technical processes. . . . In particular, we show how perturbations to the growth rate of scientific output – i.e. following from the new layer of rapid online publications – affects the reference age distribution and the functionality of the vast science citation network as an aid for the search & retrieval of knowledge.

Going deep, not wide
The authors are looking at historical trends – and not looking forward to a future with Big Data and AI-driven analysis. But the historical trends are difficult enough.

They posit that science has an attention economy, and as research output has grown – especially due to China’s large investment in science and research, but also due to GDP growth everywhere – the growth of knowledge has forced researchers to narrow their focus. That trend dates back to the beginning of the Enlightenment.

Their illustrative chart of the process shows how the global reduction in income inequality has led to an explosion in research and the narrowing of scientific attention.

The StorageMojo take
Our current system for the diffusion of knowledge is breaking down. How are we going to fix it?

Google’s mission . . . to organize the world’s information and make it universally accessible and useful may no longer be enough. The ultimate problem is that human bandwidth doesn’t scale.

Sure, Really Smart People can ingest and make sense of much more information than most people; and they may come up with new paradigms that enable us to organize knowledge in more digestible ways. But the advent of automated data generation, collection, analysis and, finally, AI-driven knowledge, will overwhelm human knowledge processing.

One solution is another layer to “virtualize” knowledge domains for interdisciplinary scrambling, creating an interface between the deep and the broad. That is, in fact, what we’ve been doing for the last 75 years or so. But it isn’t enough.

So Google – or someone else with the next money-spinning machine and an appetite for moon-shot projects – is going to have to help a large group of RSPs figure out how to artificially augment human cognition. That’s a challenge for the 21st century.

Courteous comments welcome, of course.


Nantero NRAM: ARM’d and dangerous

by Robin Harris on Wednesday, 7 September, 2016

Intel’s 3D XPoint non-volatile RAM has sucked up most of the attention in the NVRAM space, but Nantero’s NRAM has taken a giant step forward. So far forward that Intel may get ARM’d again if they aren’t careful.

NRAM?
Nantero is the 15-year-old startup pioneering carbon nanotube RAM, or NRAM. Fifteen years is a lot of pioneering, but NRAM’s promise is immense:

  • DRAM speed
  • Lower cost in volume
  • Endurance that rivals – or equals – DRAM
  • Higher density
  • Plus, of course, lower power

StorageMojo has followed Nantero for a while, but I’ve seen enough great lab demos fizzle in the marketplace that I’ve remained skeptical. But Fujitsu announced last week that they are productizing NRAM – the first Nantero licensee to go public – and since Fujitsu is also the world’s largest vendor of FRAM – ferroelectric RAM – their announcement is a weighty endorsement.

Fujitsu plans to start with a 256Mbit part and scale up and down from there – plus offering embedded NRAM in their fab. Given their FRAM customer base of people who have already stepped outside the DRAM/NAND box, they are well-positioned for NRAM design wins.

Also, Nantero is one of the few companies whose product specs have improved as they got closer to production. The product is better than expected. How often does that happen?

Can NRAM replace DRAM?
Maybe. The biggest uncertainty today is NRAM’s endurance. Nantero says they haven’t found a wear mechanism and have tested to 10^12 writes. But certifying to DRAM’s 10^15 writes will take a couple of years.
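To see why, here’s a back-of-the-envelope sketch. The write cycle time is my assumption for illustration, not a Nantero spec:

```python
# Time just to apply N writes to a single cell at an assumed cycle time.
CYCLE_NS = 20  # assumed DRAM-class write cycle, in nanoseconds

for writes in (1e12, 1e15):
    seconds = writes * CYCLE_NS * 1e-9
    print(f"{writes:.0e} writes: {seconds:>12,.0f} s = {seconds / 86400:7.1f} days")

# 1e+12 writes:       20,000 s =     0.2 days
# 1e+15 writes:   20,000,000 s =   231.5 days
```

Even writing flat out, a single device needs the better part of a year to see 10^15 cycles, and qualification needs statistics across many devices, temperatures, and data patterns – hence years, not months.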

Wrinkles
Achieving DRAM density requires vertical NRAM – multiple layers of processing. Nantero says that’s not a problem because DRAM capacitors have such a high aspect ratio that they take as many processing steps and mask layers as 8 layers of NRAM.

Achieving lower-than-DRAM pricing requires volume, and that’s where NRAM has a competitive advantage over, say, 3D XPoint. Processing can be done on today’s flash, DRAM or logic lines. NRAM processing only needs spin coating and patterning – as well as carbon nanotubes – which modern fabs all support.

The StorageMojo take
The Intel/Micron 3D XPoint announcement last year seemed rushed. At the time the rush seemed due to a possible Micron takeover. But now I think it was because of Nantero’s NRAM.

While Fujitsu is the first public reference, Nantero, who wouldn’t reveal any names, says they have a dozen other licensees, including, I’ll wager, Apple, Samsung, HP, Lenovo, Dell/EMC, TSMC, and other major players. Since Nantero, like ARM, licenses their technology, Intel wanted to take up as many development resources for 3D XPoint as possible, leaving fewer for NRAM.

But NRAM’s advantages are many. And ARM has shown that having dozens of companies working with your technology can beat one company, no matter how large and wealthy.

Courteous comments welcome, of course.


Notes on VMworld 2016

by Robin Harris on Wednesday, 31 August, 2016

Spent the day on the show floor at VMworld 2016 in sunny Las Vegas. Saw some interesting things.

  • Panzura now offers byte-range locking on their global collaboration platform. They’ve been having great success in the Autodesk Revit market. M&E seems like a natural as well. This is hard to do and few have done it well.
  • Promise Technology gave me a brief overview of their Apollo Cloud, a consumer-level private cloud focused on ease-of-use. The Apple Store sells them. A number of companies have tried this, but it looks like Promise may have nailed it. I’ve asked for a review unit and will tell you what I find when I get to try it.
  • Another company says they’ve come up with a much more computationally efficient advanced erasure coding scheme. This is the kind of stepwise enhancement to object storage that will pressure file storage over the next decade, although the company isn’t focused on objects today. More if the details support the pitch. And yes, I forgot their name.
  • StorMagic can build you a 2-node high availability cluster with their software. Popular with branch offices.
  • A new group is forming to create useful storage performance testing tools. The idea is to ask users for traces that can be used to measure workloads on a half dozen metrics – including I/O randomness and compressibility, just to name a couple – so prospects can estimate their I/O workloads and then see how similar workloads performed on a number of storage systems. More on this as the details gel; a sketch of two of those metrics follows this list.
  • WD was showing a dense box of SAS blade SSDs (JBOS). Mounted in a rack of servers, it gives you shared DAS that can be sliced and diced as needed by their software. It isn’t an array, and avoids the costs of array controllers.
  • 3D XPoint is on the minds of vendors. VMware is working with its top software partners to get them ready.
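Until the performance-testing group goes public, here’s a hypothetical sketch of two of the proposed metrics, assuming a trace of (LBA, data block) records. The names and trace format are mine, not theirs:

```python
import zlib

def compressibility(blocks):
    """Compressed-to-raw size ratio: lower means more compressible data."""
    raw = b"".join(blocks)
    return len(zlib.compress(raw)) / len(raw)

def randomness(lbas):
    """Fraction of I/Os that are not sequential to the previous one."""
    non_seq = sum(1 for a, b in zip(lbas, lbas[1:]) if b != a + 1)
    return non_seq / max(len(lbas) - 1, 1)

# Toy trace: fully sequential LBAs, all-zero (highly compressible) 4 KiB blocks
lbas = list(range(1000))
blocks = [bytes(4096) for _ in lbas]
print(f"compressibility: {compressibility(blocks):.3f}, randomness: {randomness(lbas):.2f}")
```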

The StorageMojo take
VMworld is a good storage show, with a lot of creativity aimed at making storage work better within virtualized environments.

Yet VMworld is only a part of a much larger and more complex storage market. With all the choices we have now – and all the ones coming down the pike – the forecast is full employment for storage and system architects.

Courteous comments welcome, of course. WD is an advertiser on StorageMojo.


VMworld next week

by Robin Harris on Friday, 26 August, 2016

The StorageMojo crack analyst team is busy polishing their cowboy boots and ironing their jeans to get respectable (why now?) for next week’s VMworld in Las Vegas. Las Vegas is a short – by Western standards – 4 to 5 hour drive from the high pastures of northern Arizona, and a favorite place for the boys to let off some steam.

The StorageMojo take
Looking forward to catching up with storage in the virtual world, especially after missing the Flash Memory Summit. Please leave a comment if you’d like to meet.

Courteous comments welcome, of course.


Excel may be dangerous to your health – and your nation

by Robin Harris on Friday, 26 August, 2016

Over on ZDNet I’ve been doing a series looking at the issues we face incorporating Big Data into our digital civilization (see When Big Data is bad data, Lying scientists and the lying lies they tell, and Humans are the weak link in Big Data). I’m not done yet, but I wanted to share a couple of cautionary Excel tales.

The latest comes by way of the paper Gene name errors are widespread in the scientific literature. The researchers

. . . downloaded and screened supplementary files from 18 journals published between 2005 and 2015 using a suite of shell scripts. Excel files (.xls and .xlsx suffixes) were converted to tabular separated files (tsv) with ssconvert (v1.12.9). Each sheet within the Excel file was converted to a separate tsv file. Each column of data in the tsv file was screened for the presence of gene symbols.

Result: 20% of the papers had errors. Specifically:

In total, we screened 35,175 supplementary Excel files, finding 7467 gene lists attached to 3597 published papers. We downloaded and opened each file with putative gene name errors. Ten false-positive cases were identified. We confirmed gene name errors in 987 supplementary files from 704 published articles

The cause?

The problem of Excel . . . inadvertently converting gene symbols to dates and floating-point numbers was originally described in 2004 [1]. For example, gene symbols such as SEPT2 (Septin 2) and MARCH1 [Membrane-Associated Ring Finger (C3HC4) 1, E3 Ubiquitin Protein Ligase] are converted by default to ‘2-Sep’ and ‘1-Mar’, respectively. Furthermore, RIKEN identifiers were described to be automatically converted to floating point numbers (i.e. from accession ‘2310009E13’ to ‘2.31E+13’). Since that report, we have uncovered further instances where gene symbols were converted to dates in supplementary data of recently published papers (e.g. ‘SEPT2’ converted to ‘2006/09/02’).

Suboptimal.
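For the curious, here’s a minimal sketch of the kind of screen the authors describe. Their actual pipeline used shell scripts and ssconvert; the patterns and names below are mine, purely illustrative:

```python
import re

# Shapes that mangled gene symbols take on after Excel's auto-conversion
DATE_LIKE = re.compile(r"^\d{1,2}-(Jan|Feb|Mar|Apr|May|Jun|Jul|Aug|Sep|Oct|Nov|Dec)$")
FLOAT_LIKE = re.compile(r"^\d(\.\d+)?E\+\d+$")  # e.g. '2.31E+13' from RIKEN '2310009E13'

def flag_mangled(column):
    """Return cells that look like Excel-converted gene symbols."""
    return [v for v in column if DATE_LIKE.match(v) or FLOAT_LIKE.match(v)]

cells = ["SEPT2", "2-Sep", "MARCH1", "1-Mar", "2.31E+13", "TP53"]
print(flag_mangled(cells))  # ['2-Sep', '1-Mar', '2.31E+13']
```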

Nation unbuilding
Another, older Excel misadventure occurred in Ken Rogoff and Carmen Reinhart’s paper Growth in a Time of Debt, which was the intellectual justification for widespread national austerity over the last 7 years. That austerity put millions of people out of work and slowed – and in some cases reversed – economic recovery after the Great Recession.

Too bad for the unemployed who lost homes, life savings, families, and self-respect, but the academics made some key Excel mistakes that weren’t uncovered until a grad student tried to replicate their results. As this piece in The Atlantic notes, the paper itself was suitably conservative, but the academics oversold their results to Congress and other policy-making bodies.

The StorageMojo take
Given that the genetic issue was first identified in 2004, it is unsettling that Microsoft, with its vast resources and world-class research organization, hasn’t been proactive in helping Excel users avoid these issues. Word has a grammar checker; helping users avoid common mistakes seems doubly applicable to numerical data, which most readers assume is correct because, after all, the computer did it.

Perhaps a smarter Excel would have noted that Rogoff failed to include five countries in the data set in the final calculations – and maybe a neural-net data checker could flag problems like that – but it isn’t the Excel team’s fault that economists oversold their faulty results. Publishing the spreadsheets along with papers – as they do in genome research – would be a help.
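A data checker needn’t be a neural net to catch the simplest version of that error. Here’s a hypothetical sketch – made-up data, not the actual Reinhart-Rogoff spreadsheet – of an aggregate that warns when labeled rows are silently left out:

```python
def mean_with_coverage(values_by_country, included):
    """Average only the included rows, but warn about any omitted ones."""
    omitted = sorted(set(values_by_country) - set(included))
    if omitted:
        print(f"WARNING: {len(omitted)} rows omitted from average: {omitted}")
    used = [values_by_country[c] for c in included]
    return sum(used) / len(used)

growth = {"A": 2.2, "B": 2.6, "C": -0.1, "D": 1.9, "E": 2.4}
# A selection range that quietly skips two countries:
print(mean_with_coverage(growth, ["A", "B", "C"]))  # warns about D and E
```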

But the larger takeaway is that while our computers are usually accurate, our human brains are riddled with cognitive and logical bugs. While Computer-Assisted-Everything has enormous potential, we must remember to keep our BS detectors tuned up and running.

Courteous comments welcome, of course.


NetApp’s surprising Q1

August 23, 2016

NetApp’s Q1 was a happy surprise for Wall Street: earnings blew past estimates and the stock spiked over 16%. But the quarterly 8-K report was more downbeat. Net revenue was down $41 million year over year. Products the company calls Strategic – presumably hybrid cloud and flash, but not defined in the 8-K […]


World’s largest manufacturer of vinyl records

August 22, 2016

A story from the byways of data storage. Vinyl audio records have been making something of a comeback. Fans prefer the sound, and DJs like to “scratch” them, which is pretty cool the first hundred times you hear it. A series of pieces in the UK paper the Guardian describes the current state of vinyl, […]


Flash Memory Summit next week

August 1, 2016

And sad to say, for the first time in years, StorageMojo won’t be there. Dang it! A physical condition is cramping my style. It’s temporary and will be fixed by early next year. So I’ll be looking for whatever gets posted online, but missing the show floor. The StorageMojo take For a few years the […]


A look at Symbolic IO’s patents

July 22, 2016

Maybe you saw the hype: Symbolic IO is the first computational defined storage solution solely focused on advanced computational algorithmic compute engine, which materializes and dematerializes data – effectively becoming the fastest, most dense, portable and secure, media and hardware agnostic – storage solution. Really? Dematerializes data? This amps it up from using a cloud. […]


Bandwidth reduction for erasure coded storage

July 12, 2016

In response to Building fast erasure coded storage, alert readers Petros Koutoupis and Ian F. Adams noted that advanced erasure coded object storage (AECOS) isn’t typically CPU limited. The real problem is network bandwidth. It turns out that the same team that developed Hitchhiker also looked at the network issues. In the paper A Solution […]


Building fast erasure coded storage

July 11, 2016

One of the decade’s grand challenges in storage is making efficient advanced erasure coded object storage (AECOS) fast enough to displace most file servers. Advanced erasure codes can give users the capability to survive four or more device failures – be they disks, SSDs, servers, or datacenters – with low capacity overhead. By low I […]


The top storage challenges of the next decade

July 6, 2016

StorageMojo recently celebrated its 10th anniversary, which got me thinking about the next decade. Think of all the changes we’ve seen in the last 10 years: cloud storage and computing that put a price on IT’s head. Scale out object storage. Flash. Millions of IOPS in a few RU. Deduplication. 1,000 year optical discs. There’s […]


July 4th, 2016: Mormon Canyon

July 4, 2016

July 4th is when the United States of America celebrates the signing of the Declaration of Independence. For most Americans Independence Day is the most important secular holiday of the year. Of course, July 4th wasn’t the actual date of the signing – July 2nd was – but no matter. Of greater interest is the […]


Meeting young Mr. Trump

June 30, 2016

Back in 1980 I met Donald Trump. He came to a finance class to talk about real estate finance. I have no recollection of his talk. But I DO remember the visit and, given what I’ve read about Mr. Trump, some readers may find my recollection an interesting footnote. Ivana To set the scene, this […]
