Why startups fail

by Robin Harris on Wednesday, 21 June, 2017

A great piece at CB Insights. They collected the failure stories of 101 startups and then broke those failures into 20 categories.

Spoiler alert!
Here are the top 10 reasons for failure, as compiled by CB Insights.


What I find interesting is that 8 of the top 10 reasons are marketing related.

  • No market need.
  • Get outcompeted.
  • Pricing, cost issues.
  • Poor product.
  • Lack of a business model.
  • Poor marketing.
  • Ignore customers.
  • Product mis-timed.

Across the cultural divide
Tech founders tend to be techies, and techies tend to have a problem with folks of the sales/marketing persuasion. One problem is that many marketing people don’t really understand the technology they are marketing, which means they can’t be full partners to the tech team.

Another problem is that marketing people tend to be well-versed in the arts of persuasion. If the marketer takes a position, especially regarding technology they don’t appreciate, they can easily steer the startup in the wrong direction.

Plus, every techie has a story where they’ve felt misled by a sales or marketing person, and that anger or regret can bleed into professional relationships in a startup.

Finally, techies rarely have a handle on what to look for in their marketing hires. Based on more than 35 years’ experience, StorageMojo has a suggestion.

The StorageMojo take
My sympathies are with the engineers when it comes to their feelings about marketing. As I said in the link above:

They’d get flayed for every decommit and slip. They’d sweat blood figuring out solutions to hundreds of subtle problems.

Then, after 2 to 3 years of effort, they’d deliver the product to marketing and, all too often, watch their hard work go for naught.

Maybe marketing missed some key features. Didn’t position the product properly. Training failed to equip the field. Mis-pricing. Tougher competition than expected.

That last paragraph captures many of the same issues the CB Insights survey found. Which shouldn’t be a surprise.

Startups exist to sell a product. Development is only a means to that end.

Courteous comments welcome, of course. Disclosure: I offer services to help startups with every phase of product development.

{ 3 comments }

A transaction processing system for NVRAM

by Robin Harris on Monday, 19 June, 2017

Adapting to NVRAM is going to be a lengthy process. This was pointed out by a recent paper. More on that later.

Thankfully, Intel wildly pre-announced 3D XPoint. That has spurred OS and application vendors to consider how it might affect their products.

As we saw with the adoption of SSDs, it takes time to unravel the assumptions built into products. Take databases: they spent decades optimizing for hard drives, and when SSDs came along many of those optimizations became detrimental.

Durable transactions
On the face of it, it shouldn’t be that hard. You want a durable transaction, you have persistent NVRAM. Are we good here?

Nope.

In a paper published by Microsoft Research, DUDETM: Building Durable Transactions with Decoupling for Persistent Memory, the authors (Mengxing Liu, Mingxing Zhang, Kang Chen, Xuehai Qian, Yongwei Wu, Jinglei Ren) go into the issues:

While persistent memory provides non-volatility, it is challenging for an application to ensure correct recovery from the persistent data on a system crash, namely, crash consistency. A solution . . . is using crash-consistent durable transaction[s]. . . .

Most implementations of durable transactions enforce crash consistency through logging. However, the. . . dilemma between undo and redo logging is essentially a trade-off between update redirection cost and persist ordering cost.

The authors make a bold claim:

[O]ur investigation demonstrates that it is possible to make the best of both worlds while supporting both dynamic and static transactions. The key insight of our solution is decoupling a durable transaction into three fully asynchronous steps.

Solution
To create a fully decoupled transaction system for NVRAM, the researchers made three key design decisions.

  • A single, shared, cross-transaction shadow memory.
  • An out-of-the-box transactional memory (TM).
  • A redo log as the only way to transfer updates from shadow memory to persistent memory.

These design choices enabled building an ACID transaction from three decoupled, asynchronous steps (a toy sketch follows the list).

  • Perform: execute the transaction in a shadow memory, and produce a redo log for the transaction.
  • Persist: flush the redo log of each transaction to persistent memory in an atomic manner.
  • Reproduce: modify original data in persistent memory according to the persisted redo log.
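
To make the decoupling concrete, here is a minimal Python sketch of the three steps. It is a toy model under simplifying assumptions: the shadow and persistent dictionaries, the list-based redo log, and the function names are illustrative stand-ins, not the DudeTM implementation.

```python
# Toy model of a decoupled durable transaction in the DudeTM style.
# Hypothetical structures; real systems use shadow DRAM, an STM, and NVRAM flushes.

persistent = {}            # stands in for original data in persistent memory
shadow = dict(persistent)  # single shared, cross-transaction shadow memory
redo_logs = []             # persisted redo logs, in commit order

def perform(tx_writes):
    """Step 1 (volatile): run the transaction against shadow memory,
    producing a redo log of its writes."""
    redo_log = []
    for key, value in tx_writes.items():
        shadow[key] = value
        redo_log.append((key, value))
    return redo_log

def persist(redo_log):
    """Step 2: flush the transaction's redo log to persistent memory.
    Appending to redo_logs models the atomic log write."""
    redo_logs.append(list(redo_log))

def reproduce():
    """Step 3 (background): apply persisted redo logs to the original
    data in persistent memory, then discard them."""
    while redo_logs:
        for key, value in redo_logs.pop(0):
            persistent[key] = value

# The steps are fully asynchronous: perform() returns once the log exists,
# persist() makes the transaction durable, reproduce() can lag behind.
log = perform({"balance:alice": 90, "balance:bob": 110})
persist(log)
reproduce()
print(persistent)  # {'balance:alice': 90, 'balance:bob': 110}
```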

Performance
The paper is lengthy and a recommended read for those professionally interested in transaction processing on NVRAM. But here’s their performance summary.

Our evaluation results show that DUDETM adds guarantees of crash consistency and durability to TinySTM by adding only 7.4% ∼ 24.6% overhead, and is 1.7× to 4.4× faster than existing works Mnemosyne and NVML.

The StorageMojo take
As we’ve seen with the transition from hard drives to SSDs, unwinding decades of engineered-in assumptions in the rest of the stack is a matter of years, not months. There’s the issue of rearchitecting basic systems, such as transaction processing or databases, and then the hard work of stepwise enhancement of those new architectures as we gain knowledge about how they intersect with the new technology and workloads.

There are going to be many opportunities for startups that focus on NVRAM. The technology is coming quickly and with more technology diversity – there are several types of NVRAM already available, with more on the way, and each has different trade-offs – which means that the opportunities for creativity are legion.

Courteous comments welcome, of course.

{ 2 comments }

A distributed fabric for rack scale computing

by Robin Harris on Monday, 12 June, 2017

After years of skepticism about rack scale design (RSD), StorageMojo is coming around to the idea that it could work. It’s still a lab project, but researchers are making serious progress on the architectural issues.

For example, in a recent paper, XFabric: A Reconfigurable In-Rack Network for Rack-Scale Computers, Microsoft researchers Sergey Legtchenko, Nicholas Chen, Hugh Williams, Daniel Cletheroe, Antony Rowstron, and Xiaohan Zhao discuss

. . . a rack-scale network that reconfigures the topology and uplink placement using a circuit-switched physical layer over which SoCs perform packet switching. To satisfy tight power and space requirements in the rack, XFabric does not use a single large circuit switch, instead relying on a set of independent smaller circuit switches.

The network problem
My concerns around RSD have always centered on the network. It’s obvious that Moore’s Law is making more powerful and efficient Systems on a Chip (SoCs) more attractive. And flash has eliminated many issues around storage, particularly power, cooling, weight, and density – while cost is steadily improving.

Which leaves the network. Network bandwidth is much more costly than internal server bandwidth, and, due to the bursty nature of traffic, much more likely to constrain overall system performance.

Which, in a nutshell, is the business justification for hyperconverged infrastructure: blocks of compute, memory & storage using cheap internal bandwidth, with Ethernet interconnecting the blocks. But today we can have a couple of thousand microservers in a rack.

Now if we could only figure out how to network them at reasonable cost and performance. Traditional Top-of-Rack (ToR) switches are costly and don’t scale well.

Higher server density requires a redesign of the in-rack network. A fully provisioned 40 Gbps network with 300 SoCs would require a ToR switch with 12 Tbps of bisection bandwidth within a rack enclosure which imposes power, cooling and physical space constraints.
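
That 12 Tbps figure is simply port count times port speed:

```python
socs, port_gbps = 300, 40         # fully provisioned 40 Gbps per SoC
total_gbps = socs * port_gbps     # 12,000 Gbps
print(total_gbps / 1000, "Tbps")  # 12.0 Tbps of in-rack switching capacity
```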

Fully distributed networks are much cheaper, but inflexible. That’s why HPE’s Moonshot uses three network topologies: one for ingress/egress traffic, a multi-hop network for storage, and a 2D torus fabric for in-rack traffic.

The XFabric answer
With XFabric the MR team decided to split the difference.

. . . XFabric uses partial reconfigurability. It partitions the physical layer into a set of smaller independent circuit switches such that each SoC has a port attached to each partition. Packets can be routed between the partitions by the packet switches embedded in the SoCs. The partitioning significantly reduces the circuit switch port requirements enabling a single cross point switch ASIC to be used per partition. This makes XFabric deployable in a rack at reasonable cost.

Of course, you then have to deal with the fact that the fabric is not fully reconfigurable. Which is where the XFabric secret sauce comes in.

XFabric uses a novel topology generation algorithm that is optimized to generate a topology and determine which circuits should be established per partition. It also generates the appropriate forwarding tables for each SoC packet switch. The algorithm is efficient, and XFabric can instantiate topologies frequently, e.g. every second at a scale of hundreds of SoCs, if required.
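
To make the idea concrete, here is a minimal Python sketch, assuming a toy rack of six SoCs and two partitions. The circuit sets, the neighbors() helper, and the BFS table builder are illustrative stand-ins, not the paper’s data structures or algorithm.

```python
from collections import deque

# Toy model of partial reconfigurability: each SoC owns one port per partition,
# each partition is an independent circuit switch holding point-to-point circuits,
# and SoC packet switches forward traffic between partitions.

SOCS = range(6)
partitions = [
    {(0, 1), (2, 3), (4, 5)},   # circuits established in partition 0
    {(1, 2), (3, 4), (5, 0)},   # circuits established in partition 1
]

def neighbors(soc):
    """SoCs reachable in one hop over any partition's circuits."""
    for circuits in partitions:
        for a, b in circuits:
            if a == soc:
                yield b
            elif b == soc:
                yield a

def forwarding_table(src):
    """BFS over the union of all circuits: for each destination,
    record the next hop the SoC's packet switch should use."""
    next_hop, frontier, visited = {}, deque([(src, None)]), {src}
    while frontier:
        soc, first = frontier.popleft()
        for n in neighbors(soc):
            if n not in visited:
                visited.add(n)
                hop = first if first is not None else n
                next_hop[n] = hop
                frontier.append((n, hop))
    return next_hop

# Every SoC gets its own table; recompute whenever circuits are re-established.
for soc in SOCS:
    print(soc, forwarding_table(soc))
```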

The team evaluated XFabric on a test bed and the results were stunning:

The results show that under realistic workload assumptions, the performance of XFabric is up to six times better than a static 3D-Torus topology at rack scale. We also show it provides comparable performance to a fully reconfigurable network while consuming five times less power.

The StorageMojo take
With the work being done on PCIe fabrics, I/O stack routing, composable infrastructure, and resilience in distributed storage, we are reaching a critical mass of basic research that points to a paradigm-busting architecture for RSD. In 10 years today’s state-of-the-art hyperconverged systems will look like a Model T Ford sitting next to a LaFerrari Aperta.

A key implication of RSD is that it will favor warehouse scale systems. That’s good news for cloud vendors.

But if RSD is as configurable as the current products and research suggests, it will also find a home in the enterprise. The tension that exists today between object storage in the cloud and object storage in the enterprise will govern enterprise adoption.

But that’s a topic for another post.

Courteous comments welcome, of course.

{ 0 comments }

Infinidat sweetens All Flash Array Challenge

by Robin Harris on Tuesday, 6 June, 2017

In response to yesterday’s StorageMojo post on Infinidat, Brian Carmody of Infinidat tweeted:

Thanks, Brian!

The StorageMojo take
Verde Valley Sanctuary is a fine organization that StorageMojo has supported for years. I’d love to see them get much needed support from StorageMojo readers who take the Faster than all flash challenge.

If you’re evaluating all flash arrays, give Infinidat a chance – and StorageMojo’s favorite charity a boost – by taking Infinidat up on their challenge. Mention the StorageMojo post when you sign up for the challenge and we’re good.

And let me know how it goes – win, lose or draw – and I’ll be happy to publish your experiences.

Courteous comments welcome, of course.

{ 0 comments }

Infinidat’s sweet AFA challenge

by Robin Harris on Monday, 5 June, 2017

StorageMojo has observed, many times, that great marketing of a mediocre product beats mediocre marketing of a great product all the time. Thus it is always of interest when someone comes up with an innovative marketing wrinkle.

That’s what Infinidat has done with their Faster than all flash challenge. Their claim is that their system will outperform any all flash array (generally available prior to 1-1-2017) or they donate $10,000 to the charity of your choice.

A safe bet?
Infinidat says their system beats AFAs because:

  • Massive parallelism.
  • Machine learning to optimize caching.
  • DRAM is used to cache hot data.
  • Cache misses are usually serviced by NAND flash.

In theory, they might have some longer-tailed I/Os than an AFA. But if most of the workload is being served out of DRAM, then sure, they should get MUCH higher performance. I doubt they’ll be donating to many charities.
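
A rough back-of-envelope model backs that up. The latencies below are my own illustrative assumptions, not Infinidat’s or any vendor’s figures, but they show how a high DRAM hit rate dominates the average read latency:

```python
# Hypothetical latencies; illustrative only, not vendor figures.
dram_hit_us = 0.1     # read served from the DRAM cache
nand_miss_us = 100.0  # read that falls through to NAND flash
afa_read_us = 100.0   # typical all-flash-array read, for comparison

for hit_rate in (0.90, 0.95, 0.99):
    avg = hit_rate * dram_hit_us + (1 - hit_rate) * nand_miss_us
    print(f"hit rate {hit_rate:.0%}: avg read ~{avg:.1f} us vs AFA {afa_read_us} us")
```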

The StorageMojo take
Good marketing does a credible job of presenting and resolving customer pain points. Great marketing links the product to something greater, something meaningful, and taps into the powerful human desire to make a difference. This does both.

StorageMojo supports the Verde Valley Sanctuary, and I’d love to see them get $10k from Infinidat. But I don’t have an enterprise workload to try them out on.

If you have taken, or plan to take, Infinidat up on their offer, please drop StorageMojo a line on how it goes. What’s the worst that can happen?

Courteous comments welcome, of course.

Update: From Infinidat’s Drew Schlussel:

Just to clarify, INFINIDAT will make a donation regardless of who wins. The fine print is that the “winner” determines where the donation goes. It is the best example of “it’s all good”.

End of update.

{ 5 comments }

Hike blogging: Devils Creek Road

by Robin Harris on Saturday, 3 June, 2017

Taking a vacation from the usual slog in NoAZ. I’m some 60 miles north of Seattle, working on my rain tan.

The weatherman claims we’ll break 70 degrees sometime during my visit, but I’m not counting on it. Occasional patches of blue sky remind me of what is possible, if not likely.

Yesterday I took a 4.5 mile hike up an old logging road called Devils Creek Road. If the clouds had cooperated I would have had a beautiful view of Mt. Baker, one of the Cascade volcanoes, a range that also includes Mt. Rainier and Mt. St. Helens. But no-o-o!

But you can see some of the snow capped ridges in front of the mountain in this photo:


However beautiful the mountains are, my favorite place here is the Skagit Valley, which you can see some of in the left half of this panorama. Imagine a table-flat plain with rounded hills rising up on it and surrounding it. Amazing.


Note: Couldn’t upload the full size pano. WordPress couldn’t handle it.

Enjoy!

The StorageMojo take
Plan to head out to Friday Harbor next week. Sorry to be missing HPE Discover, but the San Juan Islands and the Skagit Valley will have to do.

{ 0 comments }

Routing the I/O stack

May 30, 2017

Lots of energy around the concept of Rack Scale Design (Intel’s nomenclature) in systems design these days. Instead of depositing a CPU, memory, I/O, and storage on a single motherboard, why not have a rack of each, interconnected over a high-bandwidth, low-latency network – PCIe is favored today – and use software to define bundles […]

6 comments Read the full article →

Liqid’s composable infrastructure

May 8, 2017

The technology wheel is turning again. Yesterday it was converged and hyperconverged infrastructure. Tomorrow it’s composable infrastructure. Check out Liqid, a software-and-some-hardware company that I met at NAB. The software – Element – enables you to configure custom servers from hardware pools of compute, network, and, of course, storage. I met Liqid co-founder Sumit Puri […]

1 comment Read the full article →

NAB 2017 storage roundup

May 4, 2017

Spent two days at the annual National Association of Broadcasters (NAB) confab in Las Vegas. With 4k video everywhere, storage was a hot topic as well. Here’s what caught my eye. Object storage – often optimized for large files – continues to be a growth area. Scality, Dynamic Data Pool, Object Matrix, HGST, Data IO, […]

0 comments Read the full article →

Is NetApp still doomed?

April 20, 2017

A reader wrote to ask for the StorageMojo take on NetApp now, as opposed to the assessment in How doomed is NetApp? two years ago. Q3 had some good news for NetApp. In their latest 10Q filing, they noted that while revenues for the first 9 months of the year were down 3%, for the […]

2 comments Read the full article →

Spin Transfer Technologies: next up in the MRAM race

April 19, 2017

MRAM technology is hot. I’ve written about Everspin – they’ve been shipping for years and just IPO’d – and now I’d like to introduce Spin Transfer Technologies. They’ve kept a low profile – they AREN’T shipping, are sampling protos, and they do have some nice Powerpoints. I spoke to their CEO, Barry Hoberman, and the […]

2 comments Read the full article →

Sizing the overconfig effect on the array market

March 30, 2017

For decades customers routinely overconfigured storage arrays to get performance. Customers bought the most costly hard drives – 15k SAS or FC – at huge markups. Then they’d short stroke the already limited capacity of these high cost drives – turning a 900GB drive into a, say, 300GB drive – in order to goose IOPS […]

3 comments Read the full article →

Hike blogging: Deadmans Pass panorama

March 25, 2017

I’ve been hiking a lot the last couple of weeks, getting in shape after a long hiatus. Today I took a loop that I’ve never done counterclockwise and even though it shouldn’t have made much difference, it was a much more enjoyable hike. The loop began with an easy walk of about a mile on […]

0 comments Read the full article →

DSSD’s demise

March 22, 2017

A couple of weeks ago Dell EMC announced the demise of the once promising DSSD all flash array. They are planning to incorporate DSSD technology into their other products. As StorageMojo noted 4 years ago, DSSD developed a lot of great technology. But for whatever reason – perhaps turmoil associated with Dell’s purchase of EMC? […]

1 comment Read the full article →

Avere closes new round – with a twist

March 21, 2017

Avere announced this afternoon that they’ve closed a Series E round of $14 million, bringing their total funding to a cool $97 million. Existing investors Menlo Ventures, Norwest Venture Partners, Lightspeed Venture Partners, Tenaya Capital and Western Digital Capital all re-upped, always a good sign. But the twist? Google Inc. joined the round. The StorageMojo […]

0 comments Read the full article →