EMC’s Chuck Hollis blogged about The Vendor Beating a couple of months ago. The unspoken question in the post is “how do we understand what customers are telling us?”
He writes:
As an employee of a large IT vendor, I’ve been at the receiving end of a reasonable number of vendor beatings.
Occasionally it’s richly deserved. But, sometimes, it’s masking a deeper set of issues that have very little to do with any vendor whatsoever.
Unhappy customers, like unhappy families, are all unhappy in their own way. This customer appeared to be overstaffed, under-skilled and poorly managed.
Interpretation
Interpreting customer complaints and behavior is hard. When companies can’t decipher what customers want – which is usually what the company isn’t selling – it is easy and dangerous to tune them out.
Customers can tell you things about your company and products that you can’t directly discover for yourself, but what customers say may be different from what they think. And both are influenced by the customer’s context, which can include company politics, prior vendor experiences, knowledge deficits and employee level.
Diagnosis
Steve Jobs once said that customers don’t know what they want until you show it to them. Customers know what would improve the current product in the current use case, but they can’t imagine bringing multiple novel technologies to bear on a much broader problem.
Tablet computers flopped for years until the iPad crystallized the market. Everyone saw the tablet problems: thick; heavy; slow; clunky UI; poor battery life; and, thanks to low volumes, cost. Incremental improvements – faster processors, more RAM, larger disks – didn’t help.
Tablets required a deep rethinking and application of several novel technologies – flash, gestures, CNC case milling, an app store and an energy-efficient OS – to create a compelling user experience.
The iPad illustrates the problem of listening to customers: they described symptoms and suggested fixes, but couldn’t articulate the underlying problem: how the use case differs from desktop and notebook PCs. That requires an act of imagination, not transcription.
The StorageMojo take
In Chuck’s post an EMC presales engineer identified the root cause of the customer’s pain:
. . . the database environment had grown willy-nilly over the years — it wasn’t laid out well, the queries weren’t particularly well written, and so on.
Sure, there were things we could do on the storage side (e.g. faster storage, better layouts, etc.), but it was a bigger issue than just storage performance.
But the larger question is: with high-speed and high-capacity SSDs, why isn’t this customer moving to an infrastructure that doesn’t need this fancy tuning? EMC can’t manage the fight between DBAs and storage admins, but they could be making it less contentious.
From within the EMC ecosystem the solution is clear: more training, professional services and faster gear. But from the outside the question is: who is building “it just works” high performance storage?
Courteous comments welcome, of course. I admire Tucci’s innovative EMC business model: outbid everyone else for chasm-crossing companies; give them global distribution and support; and watch the bucks roll in. It may not be technically innovative, but it is innovative.
I agree! I said as much on another one of Chuck’s posts about how they’ve won awards for best professional services and stuff.
The current company I’m at just invested in some new storage (they had nothing before). It’s not a big company by any stretch, but the value per transaction is quite high. We’re already at more revenue this year than all of my previous companies combined, each taking its single biggest revenue year – the company I’m at is bigger than that. Same goes for profits: more than all my previous companies combined.
Our first problem is that the current system (EC2 & RDS) provides no usable insight into the amount of I/O on the system. There are I/O metrics, but the numbers are so senseless (e.g. 4,000 IOPS to transfer 250kB) that they must be wrong (and conference calls with the people behind the stuff have proven fruitless thus far). Though for the most part I/O is pretty low.
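For what it’s worth, the arithmetic that makes me distrust those counters is simple enough (a rough sketch using only the example figures above, and assuming both numbers were reported for the same interval):

```python
# Sanity check on the reported EC2/RDS I/O counters (example figures from above).
# Assumption: the 4,000 I/O operations and 250 kB were reported for the same window.

bytes_transferred = 250 * 1024      # 250 kB
io_operations = 4000                # reported I/Os for the same window

bytes_per_io = bytes_transferred / io_operations
print(f"Implied transfer size: {bytes_per_io:.0f} bytes per I/O")  # ~64 bytes

# 4,000 real I/Os would normally move at least 4,000 x 8 KiB (a typical
# database page), i.e. roughly 32 MB, not 250 kB -- hence the suspicion that
# the counters are counting something other than actual disk I/Os.
```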
Before I started at the company, the VAR/reseller I had pointed them to directed them towards NetApp for storage. The basic line of thinking was “You’re using VMware? You want NetApp.” I shuddered at the thought of using NetApp, but I was (and still am) willing to use it; it’s not terrible, but it is inefficient and inflexible. I of course have a strong 5-year background on 3PAR. (Can’t believe it’s been that long! I think I got my first 3PAR E200 box in for eval in January 2007.)
Time went on and the boss got wind of what’s possible with 3PAR. I gave him tons of technical reasons why I would choose 3PAR over most anyone else out there; we met with the local team (not anyone I knew, since I recently moved to the area) and worked through a different reseller (who, by the way, is apparently a friend of yours, Robin – his first name is Carl). Initial pricing for 3PAR was quite a bit higher than the NetApp quote we had, but the boss was still impressed and directed NetApp to change their configuration for a more apples-to-apples comparison, which they did – and they still came in slightly cheaper. The boss decided to go with 3PAR anyway (which shocked me when I heard it). At the end of the day he increased the storage budget by something like 40% in order to get 3PAR.
From a ‘just works’ perspective, the deep virtualization of the system – allowing on-the-fly changes to pretty much any aspect of data layout during the life cycle of the volumes – was a key ability. Another was not needing to do any sort of manual load balancing between controllers. The thought of having to do that is just so foreign to me; I’ve been using full-mesh active controllers for so long that I forget the technology to distribute I/O over every controller, port and disk transparently is the exception rather than the norm.
NetApp was forcing us into RAID DP, and we had to take the overhead of that as well as very large data+parity ratios. Then there was the looming question of which data+parity ratio to use. Do we use separate RAID groups for production and non-production, knowing that if we separate them we reduce the aggregate performance of the box? All of my 3PAR boxes have always run production and non-production on the same systems – no issues. Ideally, if there were budget, I’d love to have separate physical systems, but there hasn’t been budget. I’d rather have one good 3PAR box than two crappy boxes from someone else. If those big VMware cloud providers can use 3PAR for multi-tenant workloads, I sure as hell can do the same (and have been for 5 years) on my systems.
3PAR protects (by default) against the failure of an entire shelf of storage without data loss. NetApp couldn’t do that with the configuration they had. If we have some volumes that need RAID 10 and some that need RAID 5, we can do all that on a per-volume basis and not have to waste space or I/O by dedicating spindles to certain purposes. Fast parallel RAID rebuilds give low-latency recovery from failure. We could even, technically, protect some volumes with RAID 6 if we wanted to, but that is really a waste of space and I/O on the 3PAR platform, especially with 15K RPM disks.
The NetApp argument was more on the software side, where they are stronger (of course). De-duplication of primary data was a main driver there, along with array-based replication and global, centralized deduplication for backup purposes (many:1). The VAR pushing NetApp (who is my favorite VAR, by the way, though their strategy here seemed odd to say the least) made some good points, points I have heard many times in the past. But for the most part those points won’t apply to my current company; we are too small to really leverage that (the global stuff at least – we may have 3 or 4 sites at the most in the next 2 years).
I had a simple question for them, the same question I asked HDS back in 2008: given the number of disks, the RAID type, the controllers, cache, etc., how many IOPS can this system deliver? The only real number the VAR pushing NetApp would give us was the number of IOPS from a physical-disk perspective, which they said was 2,400 IOPS for 48x15K RPM disks. That seemed incredibly low, but they weren’t willing or able to give a number from any other perspective. HDS was somewhat similar back in 2008; I asked them many times over 6 weeks to give me that number and they could not or would not. I kept asking them at the time, and the VAR pushing NetApp here, how can you spec a system out without knowing this number? No real answer.
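For reference, here’s the kind of back-of-envelope math I wanted them to walk me through (the per-disk figure is a generic rule of thumb, not anyone’s quoted number):

```python
# Back-of-envelope spindle math (rule-of-thumb figures, not vendor numbers).
# A 15K RPM disk is usually assumed good for roughly 175-200 random IOPS.

disks = 48
iops_per_15k_disk = 175            # conservative rule of thumb

raw_read_iops = disks * iops_per_15k_disk
print(raw_read_iops)               # ~8,400 -- versus the 2,400 the VAR quoted

# Even before arguing about cache, RAID type or write penalties, the raw
# spindle count alone suggests several times the number we were given.
```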
They tried to shift the focus back to the NetApp line of “everyone uses the same disks, there are no magical disks”. Maybe not, but large amounts of cache (even NetApp’s PAM), distributed sub-disk RAID and the RAID type have a massive influence on performance.
It’s probably more common on 3PAR than anywhere else – the use of something like RAID 5 3+1. Take a 200-disk array with traditional whole-disk RAID (no hot spares, for simplicity). If you’re using 3+1 then you have a full 50 disks dedicated to parity – that’s a lot! I wouldn’t expect an EMC or HDS or IBM to promote such a configuration, because such a large percentage of spindles is “lost” to parity. But this configuration is quite common on 3PAR: there are no spindles lost to parity, because parity is on every disk. You lose some amount of usable storage vs. higher ratios, but the performance you get is very close to that of RAID 10 on other platforms. So you get the speed of RAID 10 with the space of RAID 5, and in our case we maintain the ability to lose an entire shelf of disks without impacting data availability, because the number of shelves required to achieve this is much lower than if you were doing 8+1 or 14+2 or whatever.
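To put numbers on that trade-off, here’s a rough sketch of the capacity overhead at a few common data+parity ratios (ignoring spares and metadata, as in the 200-disk example above):

```python
# Capacity devoted to parity at a few common data+parity ratios,
# ignoring spares and metadata for simplicity.

total_disks = 200

for data, parity in [(3, 1), (5, 1), (8, 1), (14, 2)]:
    overhead = parity / (data + parity)
    parity_equiv = total_disks * overhead   # capacity equivalent, in disks
    print(f"RAID {data}+{parity}: {overhead:.1%} of capacity to parity "
          f"(~{parity_equiv:.0f} disks' worth on {total_disks} spindles)")

# 3+1 gives up 25% of capacity (50 disks' worth here) in exchange for a low
# write penalty; with distributed parity every spindle still serves data I/O,
# so no specific drives sit idle holding parity.
```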
At the end of the day I told the boss: if we ever grow to the size where we can really leverage that global NetApp technology, and we decide we want to, then we can just go get a V-series (or multiple boxes for multiple sites). I don’t anticipate that ever happening, but the option is there. If you think NetApp has the best software, then we can combine the best software with the best hardware and get a pretty good solution. I know several companies who have migrated off of NetApp disks to a V-series in front of 3PAR storage. The cost is certainly not low – 3PAR is a premium product and NetApp charges a hefty premium for their V-series as well – but for some companies cost isn’t as important as it is for others.
One place I was told about recently invested more than a quarter million dollars in a 3PAR box for storing under 100GB (gigabytes) of data. The value of the data was so high that it more than offset the cost of the storage array. To me that sounds crazy, but to them it wasn’t.
Another place I know has a hard-on for the HP P9500 (HDS USP/VSP OEM) because of the uptime it offers; they were burned in the past by smaller storage companies and have been scared into the highest of the high end for both tier 1 and tier 2. The cost of those platforms is quite high, though, and they are getting pressure from other parts of the company to cut costs by a large amount, so they will be forced to re-evaluate their situation in the near future.
On my most recent storage engagement I tried to reach out to one of those newer SSD-only companies (forget which), but communications fell apart and the sales rep never followed up. Either they’re really busy with other customers or it was just a bad rep. In any case the bosses where I’m at weren’t on the prowl for the cheapest thing they could find; they wanted the best they could afford. I think they got it, for the most part.
I have a meeting with NetApp next week to try to figure out where they went wrong. They told me last week that they still don’t do eval systems unless you’re talking about multi-million-dollar deals, which is shocking to me. They said if they had a deal that was dependent on an eval they wouldn’t have been able to participate.
What I like 3PAR most for, however, is its ability to deal with the unknown – the flexibility of the system to adapt to change. Because, really, how many people out there know what their I/O profile is? Most common storage boxes simply don’t give that kind of information, and I can’t get it from the cloud. There’s very little planning needed (outside of physical resources when you buy the thing); it just runs.
My biggest example of this was re-striping ~120TB of raw data across a 3PAR T400. I was in a hurry when I initially configured the big cluster file system on the array and was assured the file system was thin provisioned; I allocated more space on the FS than we had available on the array. But I did so knowing that if I screwed it up I could go back and change it later without impacting the applications. The re-striping took longer than I thought because our apps hammered the array 24/7. It took about 5 months to go from RAID 5 3+1 to 5+1, running 24 hours a day, 7 days a week. But we made it. No noticeable application impact, no complaints.
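For context, the rough space math behind that re-stripe looks something like this (a sketch that ignores spares, chunklet metadata and the like):

```python
# Rough estimate of the usable space reclaimed by the 3+1 -> 5+1 re-stripe,
# ignoring spares and metadata overhead.

raw_tb = 120

usable_3p1 = raw_tb * 3 / 4     # 75% of raw is usable
usable_5p1 = raw_tb * 5 / 6     # ~83% of raw is usable

print(f"3+1 usable: ~{usable_3p1:.0f} TB")                # ~90 TB
print(f"5+1 usable: ~{usable_5p1:.0f} TB")                # ~100 TB
print(f"Reclaimed:  ~{usable_5p1 - usable_3p1:.0f} TB")   # ~10 TB
```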
3PAR is far from perfect, of course. I do want them to get technology similar to EMC’s FAST (the one technically innovative thing I see from EMC), leveraging flash more effectively to handle real-time changes in I/O patterns. I hammered David Scott himself on this a couple of months ago – didn’t get a good answer.
There are places where 3PAR has fallen short of my expectations, where they have unofficially promised things to me but not delivered – not too uncommon. But as a building block for storage I still think they are the best thing out there for the vast majority of scenarios – assuming you can afford it.
One of my friends is convinced I will latch on to one of these new SSD startups in the next year or two and leave 3PAR behind. He may be right – I do like technology – but storage is a pretty conservative space and it takes a while to get things right. While on paper the performance and claims of these all-SSD startups are very impressive, I think they need quite a bit more time in the field before companies like mine rely on them for primary production storage.
I’ve been saying for a long time that when it comes to SSD, these big enterprise storage companies have got to get their controller performance up by an order of magnitude. So far it’s not looking promising. 3PAR did OK by doubling performance (again), HDS not so much, and EMC refuses to release performance results. I suspect they are no better than HDS when it comes to performance, given their apparent reliance on 4-5 year old CPU technology to drive their latest and greatest VMAX.
Nate, thanks for a great comment. I’m a fan of your blog, techopsguys.com/.
I especially like your comment: “. . . when it comes to SSD – these big enterprise storage companies have got to get their controller performance up by an order of magnitude. So far it’s not looking promising.” Once they do they’ll tell everyone that SSDs are ready for prime time.
Robin
> the queries weren’t particularly well written …
well, bad queries can make a system 100x to 1,000,000x slower.
If that’s the case, even SSD won’t be enough.
Even if some managers prefer to spend money on hardware instead of DBA expertise, at least ask a DBA to identify the top 10 bad queries first.
@nate: it sounds like you are either a 3PAR shill, or your “favorite NetApp VAR” is completely clueless. 2400 IOPS out of 48 drives? It’s closer to 8k.
Parity – that argument is ridiculous. You lose the *EXACT* same amount of parity from 3PAR as you do from NetApp or anyone else for the same protection. “Spreading it across disks” (exactly what everyone besides NetApp does) isn’t any different than dedicating parity disks when it comes to space consumption.
Your entire post reads like a marketing article, without any actual factual evidence. What exactly is the “flexibility” 3PAR gave you? What specific feature enables this flexibility you keep talking about but never actually describe with anything resembling a technical description? Outside of their thin provisioning, which was unique in “enterprise” storage (I use that term loosely, as 3PAR falls somewhere between modular and enterprise) for about a year when it first came out… I’ve yet to see a feature 3PAR has that hasn’t been matched or exceeded by their competition. In their opinion it competes with the VMAX and VSP; in mine it is a tier 1.5 storage array.
Anyone that releases a new model in 2011 that still relies on FC drives has their head buried in the sand.
–A clued storage admin
Hey TimC!
I get that a lot. There are only 2 technologies that I am very passionate about (used to be 3, the 3rd being VMware, which has been busy pissing in my mouth with their licensing changes), and one of them is 3PAR. I totally understand if I come off looking like I’m compensated by them as a result; I can’t really do much other than say I’m not.
Per IOPS, I think the number you give is somewhat more reasonable. The system we ended up going with was a 32x15K RPM 600GB F200 rated at 5,000 (worst case) to 6,500 (best case) IOPS at 80/20 (read/write) on RAID 5 3+1, though if we happen to have a workload that is write-intensive I can put that on RAID 10 without dedicating any spindles to it. Four half-populated shelves give us the ability to survive a shelf failure without losing data.
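To put those quoted numbers in context, here’s the same rule-of-thumb estimate applied to the F200 (the per-disk IOPS and the RAID 5 write penalty of 4 below are generic assumptions, not 3PAR’s figures):

```python
# Same rule-of-thumb spindle math as above, applied to the F200 config.
# Assumptions: ~175-200 random IOPS per 15K disk, RAID 5 write penalty of 4.

disks = 32
read_ratio, write_ratio = 0.8, 0.2
raid5_write_penalty = 4

for per_disk in (175, 200):
    backend = disks * per_disk                       # raw back-end IOPS
    frontend = backend / (read_ratio + write_ratio * raid5_write_penalty)
    print(f"{per_disk} IOPS/disk: ~{backend} back-end, ~{frontend:.0f} front-end at 80/20")

# Naive spindle math gives roughly 5,600-6,400 back-end and 3,500-4,000
# front-end IOPS; the 5,000-6,500 range quoted to us sits above the naive
# front-end figure, presumably thanks to controller cache and write coalescing.
```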
I confirmed with 3PAR (unofficially at least) that with an 8-shelf configuration running RAID 6+2 you can in fact lose 2 full shelves of disks without data loss (on their big boxes that means up to 80 disks), something I had suspected for more than a year but that they had not been able to technically confirm for me. They said they won’t put that on paper for whatever reason (they are sticking to losing a single shelf). How many NetApp systems out there run RAID 6+2? Not that I plan to use RAID DP; it’s a waste of space and I/O with 3PAR technology (google “do you really need raid 6” for my article on that).
The VAR is not clueless, I know that much – just… arrogant? I’m not sure if that is the right word. From a technical perspective they are the strongest (by far) of any VAR I have ever worked with, and they are known for their technical skill – that is where their value is over other VARs. I won’t try to convince you of that; if you sat down with any part of their tech team you’d get the same impression pretty quickly! 🙂 They are one of the biggest resellers of NetApp on the west coast (if not the biggest, I’m not sure). Now, the sales guy on the account for the VAR is not very good (I told him as much and he pleaded for another chance, though he has been lucky enough to have enough sales to be the best rep in the company as far as revenue and profit go). But the technical side is very, very solid (even if we don’t agree on everything).
My lunch with NetApp is today where they will try to determine where their process went wrong.
But anyway, per RAID, your statement is not quite accurate. With NetApp you’re forced into RAID DP; with 3PAR and others I am not. Per space overhead, you’re correct: if I were to configure RAID 6+2 on 3PAR it would consume the same space as on NetApp. That’s not the point, though; the point is that the parity is evenly distributed over every spindle in the system. There are no dedicated parity disks and no dedicated spare disks.
I can’t go to NetApp and say I want to have 4 shelves and run RAID 5 3+1. I can’t go to NetApp and say I want a small array with only 2 shelves and I want to run RAID 10. Speaking of which:
I built another configuration for our 2nd array, which my boss wants to put in Europe. He was asking for half the I/O. The default decision was, hey, let’s just get half the disks as before (these are small arrays, as I said – fewer than 50 disks). Instead I proposed we keep the same number of disks, change them to SATA, and change to RAID 10. That gives us about the same amount of usable storage (maybe a bit more), half the I/O, and we maintain the ability to lose an entire shelf of disks without data loss or downtime (from a data availability perspective at least; depending on I/O, losing half your disks may be too much – but the load at this site is expected to be tiny).
I think I mentioned it in the original post: if I ever feel I really need the features of NetApp I can always go buy a V-series later. My last V-series experience (my only NetApp experience) was brief and wasn’t very positive (it seemed way too complicated), and the company that has it has already bought a 4-node X9000 NAS cluster to replace it because it couldn’t perform. On paper, at least, the V3160 had twice the I/O capacity of the 2-node Exanet cluster it was going to replace, but their application load just overwhelmed it. I know several companies who have consolidated multiple NetApp systems behind a V-series on a 3PAR – customers that won’t use NetApp disks even if they were free, because they think they’re that bad, but they like the software features. So it’s nice they have the option to use the V-series.

NetApp of course tries to leverage the V-series as a foot in the door to sell the back-end disk too, which happened to the company I bought the V3160 for: the NetApp sales team threatened that NetApp was going to discontinue support for 3PAR, and my former company immediately cut all contact with NetApp and went to HP (I’m proud they took that stance, even though everyone on my former team at that company is gone and the company is on the verge of collapse). The Seattle NetApp team doesn’t do their company any favors from a customer relationship standpoint. They close a bunch of deals because they are a bunch of ruthless, arrogant guys, on par with the likes of EMC. I stopped talking to them years ago and refuse to; any NetApp business has to be buffered by my VAR now. I know a lot more about the NetApp team there than a normal customer would.
Myself, I like that the option is there, but I don’t think I’ll ever use it. I’d pay double the cost for an Exanet or even a BlueArc cluster before I would pay for NetApp. But unfortunately for me Exanet is Dell-only and BlueArc is HDS-only – maybe they still use NetApp Engenio (formerly LSI) disks, I’m not sure.
The NetApp sales guy I am having lunch with today is a newbie; he just joined a few months ago from Symantec. One of the first things he said was “well, we didn’t get to be an enterprise storage company by…” and I said “Yeah, HDS is an enterprise storage company too and I won’t touch their stuff either.” Kind of funny. (Full disclosure – I would strongly consider the USP/VSP if I worked for a bank or something; otherwise it’s too expensive for what it does for the companies I work for, and the AMS just isn’t good enough.) The discussion with the SE will be interesting.
I think NetApp has an OK platform. As I told them, I bash a few different storage companies on my blog and I really have not bashed NetApp, for a reason. I’ve poked fun at them, but haven’t come out and said “NetApp sucks” (which I basically have said about the likes of Equallogic and Pillar, for example). It’s just that, for me, they’re not my 1st choice for SAN (that’s 3PAR) or my 2nd (maybe Compellent); 3rd place is up in the air. I wish the SAN-attached NAS market were healthier; pretty much everyone has been acquired and limited to one type of storage, or gone out of business.
Weird – I’m watching a CNBC commercial that focuses on my company at the moment. Not used to that!
Just an update: the meeting with NetApp went well, and the SE that came along is a really smart and nice guy. A real straight shooter – I like that. In our 90-minute meeting the word ‘de-duplication’ never came up once, which was refreshing. I was able to get a lot of my technical questions answered (my boss too). We had some good side discussions on hardware design and architecture (their use of multi-core CPUs, and artificially handicapping their 3000 series with dual-core processors), the prospects for their VMware-based ONTAP-v, the LSI storage business they bought, and how they lay their systems out. We also touched on why NetApp doesn’t offer things like RAID 5 or RAID 10, and on recovery times from failure.
What struck me the most, though, is still the level of planning that goes into deploying one of their systems. They have something like a dozen different RAID DP group sizes, some of which require special engineering approval (like 28+2). I asked, how do you determine which one to use? Their best common practice seems to be 14+2, but that can change depending on several factors, including future data growth and I/O needs. When he started talking about that I got flashbacks to a former co-worker of mine using Visio and Excel to lay out EMC CLARiiON arrays – when I saw him doing that (2003-2004), I told myself “I don’t want to get involved in storage.” Of course the NetApp way is simpler than Excel+Visio, but it’s still a lot more work (and you need more information, which for the fast-growing small companies I work for is really not available – a feature change in the product could radically change I/O and space requirements virtually overnight in extreme cases). You can, of course, go without such planning, but your efficiency on the system(s) will go down and you’ll likely end up paying more than you need to because the system wasn’t laid out in an optimal fashion. For many organizations I’m sure that is the norm rather than the exception. For me it has always been the exception, never the norm.
They never once tried to shift the conversation over to the software side, other than to say their view is that the RAID stuff is boring and they usually like to talk about more interesting things. Which I can understand; their disk technology is quite limited, so there’s only so much you can talk about.
I learned a lot. What I did learn didn’t make me want to use their disks any more – in fact maybe a bit less, now that I know more.
Ironically enough, as I was walking into the office someone from Nimbus Data called me back. I had been interacting with someone over there back in September and they just stopped reaching out to me (I was trying to schedule some sort of on-site presentation, right around the time of VMworld), so I gave up on trying to get them involved in my projects. The new person apologized and said they are my new contact. We’ll be talking more next month to get more info on their technology. I’m set for storage for a while, but I am interested in hearing more about their stuff; their web site doesn’t get into much detail.
I went on to LinkedIn and looked up the guy I was dealing with before at Nimbus, thinking it was some sales rep that quit or something, and it turns out it was the CEO of the company! Wow, that is not what I’m used to. They must be really small! The new “rep” apparently is the head of marketing (assuming the LinkedIn profiles are accurate in both cases).
Oh, and before I forget again – thanks for reading my blog, Robin!!
Just out of curiosity, when did NetApp not support RAID 10, 5 or 4? I’m fairly certain FAS arrays support all these RAID levels (don’t get me wrong, there are benefits to WAFL, and for risk reasons RAID 5 and 4 are kind of silly). What makes you say I can’t do a larger or smaller parity pool with RAID DP? (For the record, I’m not the hugest NetApp fan.)
Low-cost hardware and commoditization will eat up storage vendors’ lunch, as they did in other markets. Nowadays, cheap Adaptec or LSI RAID controllers offer good raw performance, proper SSD support, impressive IOPS and enterprise features like redundant dual controllers.
I’ve set up, for a few thousand euros, a server that provides up to 180,000 IOPS to an Oracle database, using an 8-SSD RAID-5 array. Sure, it isn’t very redundant, but it’s really cheap. In fact the bottleneck is in the application now (not enough parallelism); the storage performance puts the previous EMC behemoth to shame (200 FC drives in RAID-10…).
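To put rough numbers on that comparison (the per-device IOPS figures below are broad assumptions for drives of that era, not measurements from my setup):

```python
# Rough plausibility check on the 180,000 IOPS figure. Per-device numbers are
# broad assumptions for SATA/SAS SSDs and 15K FC drives of this era.

ssd_count, ssd_iops_low, ssd_iops_high = 8, 20000, 35000
fc_count, fc_iops = 200, 175

print(f"8 SSDs:  ~{ssd_count * ssd_iops_low:,} - {ssd_count * ssd_iops_high:,} IOPS raw")
print(f"200 FC:  ~{fc_count * fc_iops:,} IOPS raw")

# 160k-280k raw from eight SSDs comfortably covers 180k at the host, and is
# several times what 200 mirrored FC spindles can deliver -- which is why the
# application, not the storage, becomes the bottleneck.
```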
John – RAID 4 is still supported. I asked NetApp about RAID 1 and RAID 5, and the engineer didn’t have a firm answer; he just suspected it was too hard to put into the code at this point, since the system was engineered around RAID 4 originally. RAID 6 is basically two RAID 4 arrays smashed together, more or less; I don’t know why RAID 5 couldn’t be treated the same way.
I have read a doc or two that talked about the option of using RAID 1 on the NetApp “system disks” that store the ONTAP OS/metadata. From what I recall it was not supported (but technically possible, at least with some versions of ONTAP) and provided a way for smaller systems to use fewer spindles for that portion of the array. I haven’t seen anything anywhere that mentions using anything other than RAID 4 or RAID 6 for “regular disks” (disks not holding system data).
Other interesting tidbits I learned from him that I did not know before: if you run your controllers active/active, you need at least 3 disks set aside for the 2nd controller (I forget why; if we were going with NetApp I was going to opt for active/passive anyway, since I didn’t want to wrestle with manual load balancing, and I recall hearing stories from former co-workers who had major issues with active-active – something about the NVRAM overflowing and crashing the system – so they went active/passive to keep that from happening). Another tidbit was that they have a more advanced disk fault detection system, but it requires at least 2 hot spares to use (I forget what it was called; the meeting was a few weeks ago).
Per low-cost commodity arrays, I think the trend in recent years has been the opposite – towards higher-end, more feature-rich, higher-availability systems and consolidating more on fewer storage systems. Same goes for servers; it seems much more common these days to buy fewer, larger servers and consolidate more workloads on them.
Myself, I shudder at the thought of having many small islands of storage rather than a few continents of storage – it’s much more complex to manage and less efficient in the vast majority of cases.
There are exceptions, of course: if you really know your workload well, and/or can tolerate more downtime, and/or have fancy application-level high availability, then the low-cost stuff can work fine.
I also met briefly with Nimble Storage a few weeks ago and heard their story – I had read about them on occasion in the past, but the web sites and such never really have enough detail for me. The engineer was very smart and knowledgeable. The level of availability in their array isn’t what I’m used to on a 3PAR box (the architecture of Nimble is very much midrange-oriented; I could tell they were really aiming at Equallogic customers). I drilled them on why they don’t dump the contents of the cache to their solid-state media when there is a power failure, and they said they never write ‘data’ to the solid-state media, so once the battery for the cache expires the data is lost (which is not uncommon among low-end systems). But I figured, hey, you’ve got this fast media you can dump your cache to – why not use it? Later that night the engineer emailed me saying that very day a document was circulating at the company describing a minor redesign of the product that does exactly that: if the power goes out the cache is flushed to the SSD(s), and then the system can be off indefinitely without any data loss. Which was cool – good timing!
The company I’m at now is small enough that there isn’t a lot of room for testing things. We have a small amount of data but it’s very valuable, so unlike a few previous positions where we had many tiers of installations, this company has few.
I think I’m meeting with Nimbus later this month or next to talk to them. There are so many interesting companies in the Bay Area that I’m not used to having direct access to, after being based in Seattle for a decade.