The course has a couple of hairpin turns and a few spots where you can let the car get up to full speed, which means plenty of great spots for passing and some great racing.
The cars themselves are electric, which means you get this almost Zen-like racing experience that is near silent but still pretty intense. Every once in a while a small piece of tire will fly up and hit your visor just to remind you how cool this is.
After a 16-lap qualifying round, I was seeded just behind Jay in the 6th spot for the final. After about four laps, it was Jay and I battling for 2nd place. (The guy in first was ahead of us by a good bit.)
Jay was in second and I was literally riding his bumper in 3rd. It stayed this way until the final turn of
the final lap, where I was able to turn inside of Jay and get a half car lead
on him as we crossed the finish line.
You end up with this picture of us on the podium. Yes – they have a winners' podium at the place. And yes, I'm one place higher than Jay. Congrats to Tom (@hypervfan) on first place.
Just wanted to give Jay a hard time about this and share a little bit about how fun this whole thing was. I'm actually wearing this medal to different places around town.
Bilbo Baggins was middle-aged and a bit rebellious for a hobbit, but he had never been more than a few miles from his home. His life completely changes when he begins an unexpected and transformational journey across Middle Earth. The movie comes out this week; I would guess that most people reading this blog read the book many years ago.
I attended a panel session of Dell executives at Dell World earlier this week. They took questions for half an hour and there were a few recurring questions or themes. One theme was: How far along is Dell on its transformational journey?
Bilbo had never been beyond the Shire (his neighborhood in Middle Earth) and had never seen or done anything like this journey before. If somebody had asked him along the way, "How far along are you on your journey?", I suspect that his answer would have been specific to his own expectations and experience thus far into his journey. When he set out he only knew vaguely what he was going to do. Only near the end would he have appreciated how far he had to go when he first left.
Michael Dell answered these questions about Dell's (the company) journey as well as could be expected. He basically said that Dell is in a continual state of change and that their transformational journey is a constant. He said that they are embracing change and will continue to change over time. They realize that the endpoint is not clear while they know that they must do many things to get there.
After Bilbo's journey is complete he writes a book about his adventure and calls it There and Back Again. No doubt a completely changed person from his journey, Bilbo writes about the great things he saw and did. The key events in his life are what happened on this adventure and he wouldn't be the same person without these experiences. He also couldn't have even started on the book until the journey was over; he wouldn't have had the context to understand what was the beginning, middle, and end.
Dell has entered a key phase in its corporate life. It is working to become more than just a maker of hardware. At the end of this phase it will still most likely make hardware in many forms, but it will also be in other aspects of the IT business. It will be interesting to see how it turns out.
Seems like some people at Dell World had the Hobbit on their minds this week.
I've always been impressed with Jeff Clarke of Dell for a variety of reasons. He is an organically grown Dell executive. He started out as an entry-level guy and has worked his way up to reporting directly to Michael Dell. He always seemed to give talks and presentations that were straightforward and easy to understand. He also would answer questions directly. He is one of the longest-tenured Dell executives and understands the company and its history very well.
So what's up with the jeans?
A distinguished panel of executives was presented before the press at the kick-off of DellWorld 2012 this afternoon in Austin. Michael Dell was of course there as the CEO, Chairman of the Board, and former wunderkind who started a multi-billion dollar business from his dorm room.
Steve Felice was kind of the moderator of the panel and served to kick things off and introduce everyone. Steve is a long-time Dell executive who has been successful at essentially all of his posts at the company over the past decade.
The new guys, Marius Haas and John Swainson, were on stage, as well as Suresh Vaswani, who has been with Dell for about 5 years but was just last week moved to head Dell Services in the wake of Steve Shuckenbrock's announced departure.
All of these guys are wearing sports coats, button down shirts, and slacks. Jeff Clarke is also on stage, but he is wearing jeans and a casual button down blue checkered shirt. Very relaxed.
What's the message with the jeans?
He definitely stood out from the panel on the stage. Everybody will remember Jeff Clarke, Michael Dell, and some other guys in suits. The reason for the jeans and relaxed shirt is probably at least partly that he just likes to wear that kind of stuff, but I think he also knows that it puts him at an advantage.
The Q and A session goes by and nobody asks about the jeans. Too bad.
The thing about storage is that it has been virtual for a long time. So while the future is even more virtual, the past is virtual too. When we look back we see a lot of the same virtualization ideas already in place. It's kinda like Back To The Future in some ways, without the hover boards and DeLoreans.
Your desktop at home and your laptop at work are probably still more or less directly using their hard drives without a virtual layer in between, but on servers, storage arrays, filers, cloud backup, and just about any web app - storage is already virtual.
Storage has been virtual for a long time. A simple LUN is a virtual layer that takes many disks and presents them as if they were one. A LUN could also be a slice of many disks presented as one single disk. Sometimes these LUNs are referred to as Virtual Disks - depending on the vendor or tool.
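To make the idea concrete, here's a toy sketch (in Python, purely illustrative and not how any real array implements it) of a "LUN" object that stripes a single logical block address space across several backing disks:

```python
# Toy illustration (not any vendor's implementation): a "LUN" that presents
# several physical disks as one logical block address space by striping.
class Disk:
    def __init__(self, name, num_blocks):
        self.name = name
        self.blocks = [b"\x00" * 512 for _ in range(num_blocks)]

class StripedLUN:
    """Presents N disks as a single virtual disk, striping block by block."""
    def __init__(self, disks):
        self.disks = disks

    def _locate(self, lba):
        # Round-robin striping: logical block -> (disk, physical block)
        disk = self.disks[lba % len(self.disks)]
        return disk, lba // len(self.disks)

    def read(self, lba):
        disk, pba = self._locate(lba)
        return disk.blocks[pba]

    def write(self, lba, data):
        disk, pba = self._locate(lba)
        disk.blocks[pba] = data

# Three small "disks" appear to the host as one 3000-block virtual disk.
lun = StripedLUN([Disk("d0", 1000), Disk("d1", 1000), Disk("d2", 1000)])
lun.write(42, b"hello".ljust(512, b"\x00"))
print(lun.read(42)[:5])  # b'hello'
```

The host on the other side of that layer just sees one disk, which is the whole point.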
Storage arrays have been doing virtual stuff with amazing results for a long time now. Moving data around to different disks without interrupting access (kinda like vMotion for data) and replicating to a different location for backup and disaster recovery without impact to the applications are great examples.
Finally, all of the cloud-based storage is completely virtual. Amazon's storage service is probably the best example, but other things like Mozy and Carbonite are cloud-based backup solutions that are big virtual backup tapes in the sky.
The virtualization of storage keeps going from there. The storage functions and features are going to become increasingly virtual. In some cases vSphere does things that were previously done by storage arrays, like snapshots and cloning, while in other ways the arrays will work directly with vSphere to do these same things from a VM-specific perspective. The VAAI API that enables arrays to work more directly with vSphere is a great example.
So in the future, storage will be more virtual. There are two examples of things that have already been released that I think are signs of where things will go.
The first is the concept of taking local storage on servers and combining it across the network to create virtual shared storage. The vSphere Storage Appliance or VSA is an example. It was originally released with vSphere 5.0 and allows for the local storage of several servers to be combined and presented as a set of NFS shares that are highly available due to replication between the hosts that happens automatically in the background. At VMworld earlier this year there was also a technology preview that uses local disks to create a virtual SAN.
The second is the idea of the storage arrays themselves becoming virtual machines. NetApp released a virtual storage appliance that can be used to create a virtual NetApp filer. It runs the same OnTap software that the physical NetApp filers use. As a result this virtual filer has the same high-end capabilities, including stuff like snapshots and deduplication as well as the ability to serve NFS and iSCSI based connections.
So storage becomes more virtual and virtualization gets more storage capabilities, again.
The stuff that I have been working on for most of this year was made public in a press release just before the Thanksgiving break a couple of weeks ago. SAP's HANA database, an in-memory high performance wonder, has been officially blessed to run on VMware vSphere 5.1 for test and development by SAP.
I wrote the official VMware blog that was posted on the same day as the press release and it does a pretty good job of covering the basics of the announcement.
An additional really cool thing that happened was HP announced on the same day their plans to ship a HANA appliance based on vSphere and loaded with HANA VMs soon.
My specific role in this has been pretty interesting. I've been working with SAP to test the performance of HANA on vSphere. The goal was to show that HANA in a vSphere VM could perform almost as well as HANA on native hardware. By running lots of tests with several different workloads over many months we were able to show that performance was pretty good. I'll be able to share more details on the performance soon.
I'm continuing the work on performance testing HANA on vSphere to cover more scenarios and some new workloads. Should be lots of fun.
Storage has finally reached an inflection point over the past year where the speed of the average storage device is beginning to go up dramatically. For many moons we saw the capacity and associated cost per GB improve dramatically every year, but the actual raw performance only improved incrementally.
Speed is the future.
You can see this in both the consumer and enterprise spheres.
Many laptops and tablets now ship with SSD as the only option. It is only in the lower end of the market where you still see spinning disks as an option. The performance improvement of your new notebook over the old one is mostly due to the 5400 rpm spinning disk being replaced by an SSD. Sure, that nice new Intel i Series processor helps - but it's not the key to great performance for your shiny new laptop.
Enterprise-class storage on servers and in storage arrays is making use of SSDs extensively. They are used for huge caches, high-performance data partitions, and specialized high-speed storage arrays that avoid spinning disks entirely. The increased performance of SSDs at the reasonable prices of today makes them very compelling. Many storage arrays are constrained in terms of IOPS performance before capacity becomes an issue. This means that adding some SSDs of lower capacity but much higher performance can re-balance the equation, so the larger capacity can be put to use with good performance.
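Here's a quick back-of-the-envelope sketch of why that rebalancing works. The IOPS and capacity numbers are just rough assumptions I picked for illustration, not measurements from any particular array:

```python
# Back-of-the-envelope sizing sketch. The IOPS and capacity numbers below are
# rough assumptions for illustration, not measurements from any specific array.
def array_totals(devices):
    iops = sum(n * d_iops for n, d_iops, _ in devices)
    tb   = sum(n * d_tb   for n, _, d_tb  in devices)
    return iops, tb

# (count, IOPS per device, TB per device)
all_spinning = [(48, 150, 1.0)]                     # 48 x 1TB 7.2k disks
mixed        = [(40, 150, 1.0), (8, 20000, 0.2)]    # swap 8 disks for small SSDs

for name, cfg in [("all spinning", all_spinning), ("mixed", mixed)]:
    iops, tb = array_totals(cfg)
    print(f"{name:12s}: {iops:>7,} IOPS, {tb:5.1f} TB, {iops/tb:8,.0f} IOPS/TB")
```

Even a handful of small SSDs moves the IOPS-per-TB ratio dramatically, which is exactly the rebalancing described above.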
In addition to SSDs we also have PCIe flash-based storage that is even faster.
These trends show clearly that spinning disks are going to be like tape drives soon. They will still be around, but mostly as a large capacity, slow storage option.
What we see today with the high water benchmarks of 1 million IOPS on high end storage arrays will be common performance in all data centers. The hundreds of thousands of IOPS possible on workstation class flash storage today will be routine on laptops, desktops, and tablets. It will happen faster than we expect.
It's amazing how fast things change. Industry leaders can quickly become laggards. Those that are thrown aside can return with a vengeance. New technologies are brought to market by one company, popularized by another, and made profitable by yet others. Partnerships and mergers dramatically change the landscape and define who is a strategic partner and who is a mortal enemy.
It's a Game of Thrones.
HP and Dell were once battling each other for the crown of PCs. They had surging stock prices and strong alliances. Now they are both working very hard to establish and expand in other areas.
HP is a long time stalwart of Silicon Valley and has been up and down many times. It will likely return yet again as a leader based on its history and extensive ties in the Valley.
Dell is a Kingdom born out of the x86 world that was brash in its younger days but has now matured and is trying to figure out how to get out of its first really big down cycle.
The Microsoft / Intel alliance was once one of the most powerful combinations, but has suffered from assaults on all fronts. The alliance is still there, but both parties have new alliances that conflict with the other's interests. Intel provides chips for Apple. Microsoft has a new version of Windows for ARM processors.
Apple is one of the best tales. Its king was banished. Apple fell and almost disappeared entirely. But the old king had created new kingdoms and was finally asked to return. He makes all the right decisions, including an almost unbelievable alliance with Microsoft, and Apple becomes greater than it ever was before.
There is the old bear IBM. It has been around since the beginning of time. It has splintered off many companies and consumed many companies. It has invented technologies and markets. It has gone through many ups and downs.
And this is just a small region called Hardware. Winter is coming...
Why does everybody want to hang out at the water tower? Well, maybe not exactly at the water tower, but near it.
It's because everybody wants to have running water in their homes. Or, as we might call it using the lingo of the industry today, WaaS (Water as a Service). This week Kong and I went to a "famous" water tower to talk about infrastructure in a new So Say SMEs episode.
We don't have anything as good as Kelso falling off the water tower, but around the 7:30 mark in the video Kong gets pretty close.
There is a big competition going on all over the place. Kong and I talk about the Cowboys and Seahawks matchup this weekend in the NFL and the new VMware release vs new Microsoft release. We took So Say SMEs on the road again this week and recorded at a local high school football stadium known as the Palace on Parmer.
This week we took So Say SMEs on location to the Dell Diamond.
This is the first in a series of remote locations for our little video show. We talked about the big stuff from VMworld including the disappearance of vRAM licensing and CEO news.
I sometimes get questions from customers or partners about the capabilities of our VMs to handle workloads that have a large amount of disk IO. Last year we published the results of a 1 million IOPS test from a single vSphere 5.0 host. I talked to Chethan, who was involved in that testing, in one of the VROOM! Videos earlier this year.
To further show that high IOPS can be achieved with vSphere, we just recently published the results from a new test with a vSphere 5.1 host running a single VM that was able to exceed 1 million IOPS. There is a short blog on VROOM! with a video that shows IOMeter running during the test and 1 million plus IOPS.
It is now much easier and simpler to get and use our products together with the vCloud Suite.
VMworld sessions fill up fast, so show up early. There are a lot of people here.
The social media surrounding the event and embedded into the event continues to get better every year. The VMworld Android app is much better than in previous years. The Socialcast VMworld 2012 app is very cool. Tweetups and Tweetvites have become part of the way that events are organized.
Overall, everyone I've talked to has been impressed with VMware's new CEO, Pat Gelsinger. I thought it was pretty impressive that after only a few weeks on the job he was able to get up in front of all his new customers and talk about things in a fairly comprehensive manner.
The recent quarterly reports from Dell and HP show a sharp decline in the traditional PC business. This article from Larry Dignan at ZDnet summarizes these results and includes statements from both companies regarding said results.
Dell reported a 22% decline in revenue for its consumer PC sales, from 3.3 Billion to 2.6 Billion, compared to the same time last year. (It's actually worse than this because it achieved a 3.7 Billion quarter in between these - which is a drop of over 1 Billion dollars in revenue.) Dell explains that it has decided not to participate in the low end of the market, which accounts for some of the drop.
HP reported a 10% decline year over year for its personal systems group, from 9.6 Billion to 8.6 Billion as compared to the same time last year, which is better than Dell in terms of percentage but is still a decline in revenue of a Billion dollars. The way the groups and numbers are broken out is a bit different between the two companies, so HP's strictly consumer PC business could be a bit different than these numbers show.
The real reason for these big holes in revenue is the failure of both of these traditional PC companies to successfully move into the post PC era. Both made attempts at tablets / phones but were unsuccessful. If those attempts had been successful there would not be nearly such huge holes in their balance sheets.
The Dell Streak and HP TouchPad are the most high profile examples of the product misses by the two companies. Both companies have recognized that new products are needed and reacted by attempting to create radically different products from their traditional PC business, but have so far failed to execute successfully.
The Post PC Era is here. If you are only selling traditional PCs you are going to miss out on a large portion of the market.
Apple, Samsung, Motorola (Google), HTC, Asus, and many more smaller companies are rapidly producing new innovative products that are beginning to replace PCs for some people. The numbers are showing this is happening. (Especially if you look at Apple's iPad numbers).
Kong and I have also talked about preparing for VMworld on the last two episodes of So Say SMEs.
But when the whole thing starts rolling on Sunday night, and then the conference really officially starts on Monday morning, the best plans only go so far. So don't get too caught up in planning and just be prepared to go with the flow (which means cram in as many sessions, blogs, tweets, meet-ups, beers, videos, and expo surfing as possible).
I took a trip back in time last week. It was amazing. I was going on a trip and arrived at the airport just 45 minutes before my flight was to depart. I went straight to the American Airlines ticket counter and checked in. There was a problem with my seats on the second flight, but the American Airlines person at the counter noticed it before me and was able to fix our seats so that they were together before I even asked about it.
We then went through a very quick and painless security check and sat down and waited for about 20 minutes before the flight boarded.
It felt like what flying was like 15 years ago.
The reason is that we were flying out of Roswell, New Mexico. There is only one terminal and one gate. There are three flights a day and they all go to Dallas/Ft. Worth. It's a very simple operation, but it is well run and all of the staff was very nice and professional.
There used to be a very big base in Roswell, so the runways are nice and big. Some of the biggest airplanes the Air Force had were flown from this airport in the past - B52s and B37s included. It is still used for some military flights, as I saw a large military cargo plane taking off just before us, but there are also now the commercial American Airlines flights and private aircraft using the airport as well.
My driver's license was about to expire, so the state of Texas sent me a nice letter letting me know it was time to renew. No problem. I jumped on the website and went to renew my license remotely from the comfort of my home. No lines and no hassle. After putting in a bit of information I find out that I have to - gasp - go to the DMV office in person to renew. And stand in line.
I think that the last time I renewed in person was at least 10 years ago, maybe longer. I know that I have done at least one renewal online and maybe two other times by snail mail and a paper check.
I look at my picture from my old license and I realize that is the reason I must go in person. I'm not that young kid in the picture anymore. Wouldn't it be cool if the DMV could somehow get a new verified picture over the web? While this would be really cool, it's actually a lot easier for them to just require you to show up in person once every decade or so. If they had a Software Defined Verified Picture As A Service - SDVPaaS - that would be awesome.
The day before my birthday (which is when my license was set to expire) I got up early and was at the DMV a little bit before it opened at 8 am. There was a line of about 40 people, which was mostly teenagers looking to take their driving test.
I waited in line for about 40 minutes. I smiled for the camera and paid the renewal fee. Almost painless.
Maybe in 10 years they will have some new tech that will save me from another trip to the DMV.
Remember way back, a long time ago, when people were afraid to run Oracle RAC virtualized and Oracle didn't want to support it? It's been supported now for almost 2 years and I have talked to lots of customers that are doing it without issue or support concerns.
We are out of the dark ages and into the light. Oracle RAC administrators can get the same awesome features and benefits as their other IT brethren.
I think that the question of performance for really really BIG Oracle RAC databases on VMware vSphere might still be lingering for some. They might be thinking that their Oracle RAC database is too big for vSphere.....
I was fortunate to work with some really great Oracle experts at EMC IT over the past year. They run one of the largest Oracle RAC instances in the world and they wanted to know how it would perform as a set of VMs. We embarked on a test that used a copy of their production database and Load Runner scripts to simulate their environment. We used big 32 vCPU Monster VMs for the RAC nodes and compared the performance against their existing physical Oracle RAC cluster.
The doubts should now be completely gone. It is possible for your Oracle RAC database (yes, the one that you think is too large / too important) to leave the dark ages too.
One of the things that I really like about esxtop is that the batch mode output file can be loaded into Windows Performance Monitor, aka perfmon. There are some things about the way that perfmon allows you to quickly select different counter sets and zoom in on specific time segments of data that make it the best tool to use in some cases.
I recently found myself working with some other data that was in a plain text format but would be nicer to have in a CSV format. I also decided that it would be nice if it would load into perfmon as well, just to make things really nice.
It turns out that perfmon won't accept just any old CSV file, but is expecting a very specific format. I used some esxtop output files as examples and was then able to construct the key elements needed to create a CSV file that will load into Windows perfmon. I could not find this information anywhere else, so I thought that I would share it here for anybody who is interested.
All values must be separated by commas.
All values must be enclosed by double quotes.
The very first value on the first line is the format definition. I just copied the one used by esxtop.
The first line of values defines how the counters will be displayed and organized for selection in perfmon. It follows this format: \\<hostname>\<GroupCounterName>(<CounterName>)\<DataValueName>
The first line must have the same number of values as the number of columns (in other words - each column of data must have a label).
The first column for each row after the first row is the date / time stamp. It must be of the format MM/DD/YYYY HH:MM:SS
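If you want to generate a file like this with a script, here is a minimal Python sketch that follows the rules above. The format definition string, host name, and counter names are just examples patterned after an esxtop batch file, so substitute your own:

```python
# Minimal sketch of writing a perfmon-loadable CSV following the rules above.
# The format-definition string and counter names are examples copied in style
# from an esxtop batch file; adjust them to match your own data.
import csv
from datetime import datetime, timedelta

hostname = "myhost"  # hypothetical host name
header = [
    "(PDH-CSV 4.0) (Central Standard Time)(360)",   # format definition value
    f"\\\\{hostname}\\Group Cpu(0)\\% Util Time",    # \\host\Group(Counter)\Value
    f"\\\\{hostname}\\Group Cpu(1)\\% Util Time",
]

start = datetime(2012, 1, 1, 12, 0, 0)
rows = []
for i in range(5):
    # Timestamps must be MM/DD/YYYY HH:MM:SS in the first column
    stamp = (start + timedelta(seconds=5 * i)).strftime("%m/%d/%Y %H:%M:%S")
    rows.append([stamp, str(10 + i), str(20 + i)])   # fake sample data

with open("perfmon-ready.csv", "w", newline="") as f:
    # QUOTE_ALL enforces the "everything in double quotes" rule
    writer = csv.writer(f, quoting=csv.QUOTE_ALL)
    writer.writerow(header)
    writer.writerows(rows)
```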
Here is a sample from a CSV file that I created following these rules that does load successfully into perfmon:
Here is what it looks like when you add this CSV file as a source in perfmon and then go to add data counters:
So now you know the secrets to the file format needed for CSV files to work with windows perfmon. Have fun.
Pardon The Interrupt, but I'm asking for your vote to get a completely new type of session into VMworld this year.
We will have an expert panel of people from Cisco, Dell, StorSimple and VMware all together on stage to cover all VMworld topics. That's right. All. Topics.
VMworld 2012 session # 1913 - Pardon The Interrupt: VMworld Expert Panel on Everything
You might ask - "How is it possible to cover all topics in a single session?"
It's the format of the session. We will use the same format as the ESPN show Pardon The Interruption. We will have the list of topics. The panel will get to discuss each topic for a limited amount of time and when the bell goes off we move to the next topic. Pardon the Interruption does some serious and some not so serious sports topics. Pardon The Interrupt will do some serious and some not so serious VMworld topics.
Here are a couple of examples of ESPN's Pardon the Interruption. (I will not be wearing a Turkey costume at VMworld)
Now imagine this format with a great expert panel that is willing to have fun but also has the right background to cover all the topics.
Here are the people on the panel:
Marc Farley from StorSimple is a storage industry insider, author of multiple books on network storage, blogger at DataOnTop, and co-founder of InfoSmack and RUN! podcasts.
Scott Hanson is @CiscoServerGeek and has worked with x86 servers for almost 20 years and presented on topics ranging from supercomputing to systems management. Scott also is an excellent poker player (at least that's what I heard).
Kong Yang is the resident Virtualization vExpert at Dell. Kong and I co-presented at VMworld 2007 when Exchange was the Final Frontier and people were still nervous about virtualizing their domain controllers.
I'm the final member of the panel. I'm a performance engineer at VMware who tries to pull as many things as possible into my job description. I've presented at four of the last five VMworlds on a variety of topics. I blog here and on VROOM!. I'm also the host / creator of VROOM! Videos and sit next to Kong every week on the So Say SMEs video series.
In So Say SMEs last week Kong and I discussed the recent Transparent Page Sharing (TPS) enable or disable debate for Oracle databases. There is some background reading that you can do if you want the full details. There is a white paper by EMC, a blog by Michael Webster, and a blog by Scott Drummonds. I think that we covered it all in the video at a high level, but I do recommend the blogs and white paper for the real geeks.
Overall, I think it is great discussion to have and it gives everybody who is interested a bit more insight into some of the specifics of how TPS works. I agree with Scott - "Sibling bicker. But family is family."
This is CPU utilization from a 30 vCPU virtual machine. Each of the individual vCPU utilization levels is graphed. One CPU is much more heavily utilized than the others.
Yesterday at Sapphire, SAP and VMware issued a joint press release that shows how two companies can work together. In this case, SAP and VMware announced a deeper and stronger relationship in a few different ways that results in customers being able to more easily run, license, and get support for Sybase databases running on VMware vSphere.
There are three major components of the announcement: Performance, Support, and Licensing.
We did some joint testing to measure performance and verify compatibility. We had no issues with compatibility and the performance was excellent. The performance difference between virtual and physical for a Sybase ASE database was within a small percentage on average.
Sybase does not require that problems be reproduced on a non-virtualized setup to get support. They will support their databases running on vSphere.
There is also the ability to license Sybase based on the number of virtual CPUs being used by the VM instead of having to license for all of the processors in the physical host. This is referred to as sub-capacity licensing.
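To see what that means in practice, here is some trivial back-of-the-envelope math. The core counts and per-CPU price are made-up placeholders, not actual Sybase pricing:

```python
# Hypothetical numbers purely to illustrate the sub-capacity licensing math.
host_physical_cores = 32      # all processors/cores in the physical host
vm_vcpus            = 4       # vCPUs actually assigned to the Sybase VM
price_per_cpu       = 1000    # made-up per-CPU license price

full_capacity = host_physical_cores * price_per_cpu  # license the whole host
sub_capacity  = vm_vcpus * price_per_cpu             # license only the VM's vCPUs

print(f"full capacity: ${full_capacity:,}, sub-capacity: ${sub_capacity:,}")
```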
The deadline for submitting a presentation for VMworld this year is one week from today. I have been fortunate to get talks accepted in the past and I'm submitting a few this year and hope to be back again. It is a great experience to get to talk about some of the things that I work on, meet lots of other geeks, and learn new things throughout the week.
The last few years there have been thousands of submissions for only a small number of speaking slots. The committee that picks which are accepted has a more difficult job each year. Kong and I discussed some of the strategies and things to consider when submitting your abstract for VMworld in this episode of So Say SMEs -
Only one thing is certain - if you don't submit an idea, it won't be accepted. If you do submit an idea, you have a chance.
I got the opportunity to speak at the Dell Tech Center Users Group today. They even let me pick the topic. So I wanted something that would be really interesting and exciting. I ended up picking a topic that I love to talk about - NUMA Performance.
The only problem was that I needed a way to make it exciting. I basically covered a lot of the same information that I covered in my earlier blog post on this topic. This included how effective vSphere is at managing VMs with respect to NUMA and how vSphere 5 has vNUMA so that even large monster VMs can benefit from NUMA.
While that was pretty exciting, I added a section at the end where I covered two specific case studies. These were examples of how I had used NUMA and vNUMA to reach excellent performance with an SAP workload and a large Oracle RAC workload.
Again, I felt that this was pretty exciting stuff. But something was missing. And then I had it. Just say it in the title of the talk - Exciting NUMA Performance Information: TechTalk and Discussion.
It was a fun talk and I even got a few questions along the way. Thanks to everybody who came out.
Over the past couple of years, several of the projects that I have worked on have involved working directly with people all over the world on a real-time basis. One project from last year involved some testing with SAP, and we would have Germany, India, and Texas (me) all working directly together.
The technology that makes this possible is the same technology that allows me to work from home occasionally. The key is the Internet (Thanks Al!) that provides the backbone for the worldwide high-speed data network we all use. On top of this we have email, inexpensive VOIP-based communication, and secure VPN-based access to internal networks. We also have WebEx, Live Meeting, Adobe Connect, virtual rooms, and many other virtual conferencing / online meeting software solutions to enable sharing and collaboration. The final piece is the ability to remote desktop into Windows and of course ssh into Linux systems.
One aspect that I feel very fortunate about, is the common language of English. At least in the technology world it seems that English is the default common language. It's a good thing that Texan is so close to English that most people can still understand me.
The only difficulty is being able to keep all of the time zones straight. I've found that the easiest way to do this is simply ask Google "what time is it in <insert place of interest here>".
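If you'd rather not ask Google every time, a few lines of Python with the zoneinfo module do the same trick. The city-to-time-zone mapping here is just an example:

```python
# A small sketch of the same lookup using Python's zoneinfo instead of Google.
from datetime import datetime
from zoneinfo import ZoneInfo

zones = {
    "Texas":   "America/Chicago",
    "Germany": "Europe/Berlin",
    "India":   "Asia/Kolkata",
}

# Pick a meeting time in one zone and see what it looks like everywhere else.
meeting = datetime(2012, 6, 1, 9, 0, tzinfo=ZoneInfo("America/Chicago"))
for place, tz in zones.items():
    print(f"{place:8s}: {meeting.astimezone(ZoneInfo(tz)):%Y-%m-%d %H:%M %Z}")
```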
I think that these occasional projects will become more of the norm, where people are wherever they want to be, and work gets done by bringing them together in these globe-spanning virtual teams.
My oldest son has Type-1 Diabetes. It's an honor to have become an expert in Diabetes over the past two years. I spend a lot of time learning how to better care for him, what new products are going to be released soon, and what the most current research is finding. I'm also active in our local JDRF chapter, whose mission is to find a cure for Type-1 diabetes.
I'm also a virtualization geek who works at VMware and thinks about clouds most of the day. Which is what this blog is mostly about.
So it was very cool to see these two worlds come together recently when the JDRF moved its donor management system into the cloud. They see compelling value in having applications cloud-based and so have moved this aspect of their business onto a salesforce.com cloud-based application.
I'm personally excited to see that the JDRF will be able to spend more of its time and resources on its mission and less time on managing their IT infrastructure.
I've been fortunate enough to work with lots of great technology at several different companies. I've also had the opportunity to work directly with customers. It's interesting how the same basic phrases are used to herald each new wave of concepts and products.
"It only takes 5 minutes to setup" and "It just does a simple scan of your environment" are essentially interchangeable. I often hear new products described that they simple and easy to use. While this is sometimes true with consumer oriented products, software that is targeted for large enterprises will always require some setup and configuration that takes more than five minutes. Usually much more than five minutes. And simple scans of environments only happen if you have a simple environment.
"We are going to move "100%" of our systems onto the new spiffy new latest greatest stuff this year." This can actually happen in new or small businesses. But in most cases it is hard to move everything onto new platforms quickly enough that everything has been moved before some other new shift starts to occur. I talked to many customers who were moving towards 100% virtualized data centers a few years ago. Now everybody is trying to figure out how to make cloud a part of their operations. While this is still virtualized, I don't think it's what they had in mind when they started moving to virtualization. Before anybody is able to get all of their infrastructure into a cloud mode of operation, there will likely be further evolution to start moving towards.
"Our new system performs 10x better." As a performance guy I get nervous when I hear really grand performance claims. There is usually some truth to claims like this, but it often is not the type of performance you might assume they were talking about. Is the performance being compared to an older system or a current generation system? Is the thing that is being measured a performance metric that anybody cares about? Could anybody using it perceive this performance gain? Are we talking about really small numbers?
The reality is that most businesses have lots of different systems running on different types of hardware and different applications that are based on different technologies. I think that we as an industry are getting better at this stuff. I know that I am biased, but I think that VMware has done a lot to make things easier, more manageable, and more flexible.
But I doubt that really anything will ever allow you to move 100% of your systems in only five minutes and provide 10x performance gain - although I do admit it would be really cool if it did.
We finished the first season of VROOM! videos last week. This initial set of five videos is provided here in a single blog post. Because each video is only about 8 minutes long you could watch them all during lunch!
The idea with these videos was to keep things informal, fun, and give everybody an easy introduction to some of the highly technical and interesting performance work that is going on inside VMware. I think that they turned out great and welcome any comments or questions.
As a simple weekend project I added a second wifi access point to my house. We have always had our router/wifi access point upstairs and our floor plan is pretty open, so you can get coverage anywhere in the house. The signal wasn't great in some places downstairs, but it was no big deal.
Now that we have lots of wifi devices - namely an iPad and ASUS Transformer - that can play video and access complex websites the weak signal was frustrating at times. It turns out that it was easy to extend our wireless network.
I took my old Linksys WRT54G router out of storage and connected it to the wired network connection downstairs that I was using for a media PC. I then connected the media PC into one of the ports on the Linksys. Using the media PC, I browsed to the admin interface for the Linksys. I disabled its DHCP server, because this was being handled by the existing router upstairs. I also set the Linksys to a static IP address that was not in the DHCP range and did not conflict with the existing router. Finally, I changed the SSID to be different from the upstairs one, so I could easily tell which one I was connected to.
Now I connect with excellent signal strength both downstairs and upstairs, which was great for watching The Masters today before the regular network coverage started. I should have done this months ago.
The VROOM! video went live on Friday March 23rd and just six days later the "Big Data" initiative was announced. I do not believe this is a coincidence. I'm also glad to have him as a viewer.
As part of the ongoing video series So Say SMEs, Kong and I discussed the concept of DevOps earlier in the year. As this is a new title or classification, we spent most of the video explaining how we understand it to be currently defined. No doubt this will evolve over time, but the core idea will stay essentially the same.
I believe that as we move forward, more apps will be developed on or involve the use of a cloud platform in some way. Traditional IT specialists need to expand their areas of expertise to become DevOps Ninjas!
A question that comes up often is should hyper-threading be enabled for a vSphere (ESX) Server?
The short answer is yes. In all cases that I'm aware of, it has no downside and the upside could be as big as a 25 percent gain in performance.
Hyper-threading allows for multiple threads to run on the same physical core or processor. On the current generation of server chips this generally means that you get twice as many logical processors or threads as you have physical cores. This does not however lead to a 2x increase in performance because these are just logical threads and not actual cores. It allows for the CPUs to be more fully utilized and squeeze out another 20% or so of performance.
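Here's the rough math, using the ~20% figure above and an arbitrary example host size, of what those extra logical threads are really worth:

```python
# Rough arithmetic for what Hyper-Threading buys you, using the ~20% figure
# from above; the host size here is an arbitrary example.
physical_cores  = 16
ht_gain         = 0.20                             # typical extra throughput from HT

logical_threads = physical_cores * 2               # what the hypervisor sees
effective_cores = physical_cores * (1 + ht_gain)   # realistic extra capacity

print(f"{logical_threads} logical threads, "
      f"but roughly {effective_cores:.0f} cores' worth of throughput")
```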
So why do people still ask? There are two main reasons. The first is that an earlier version of Hyper-threading from about ten years ago didn't always provide a performance boost and in some cases caused performance to get worse. This led to a recommendation at the time to test performance before enabling it. Many people remember this and are still wary of Hyper-threading.
The other reason is that there are some application vendors that recommend disabling HT even on the current chips with the better implementation. While they probably have a valid reason for recommending this, it does not apply when the app is running in a VM on vSphere. I would assume that the application (or OS) is in some way not HT aware and could do a very inefficient job of scheduling, using logical threads like physical cores.
With vSphere 4 and 5, the scheduler is hyper-thread aware and makes intelligent decisions on where to schedule processes. This allows it to overcome an application or OS that is not HT aware. The scheduler will take advantage of the extra logical threads when it needs them, but will prioritize things to be running on a core all alone if possible.
In the common scenario where there are many VMs running on a single vSphere host, all of these extra threads give the scheduler lots more options and flexibility to give the VMs what they need when they need it. This leads to better performance for all of the VMs with generally a 10 to 20 percent advantage over the same system with HT disabled.
Even if you are only going to assign a number of vCPUs that is equal to the number of physical cores, you should still enable HT. The hypervisor will be able to use the extra CPU capacity from HT for the functions that it must do, without having to interrupt the VM as much as if HT was disabled.
From a performance engineering perspective, we sometimes run tests (or ask customers to run tests) without HT enabled for debugging purposes. It is sometimes easier to do performance comparisons without HT enabled and sometimes we need to get a baseline in performance without it running, but I always have it enabled when trying to get the best absolute numbers.
Something that I have been working to get going for a few months is starting this Friday. The VROOM! Virtualization Performance Show from VMware TV will have its first broadcast this Friday at noon CST (1pm EST and 10am PST). Here is the official announcement on the VMTN blog.
Each week will feature a video of a five to ten minute conversation between engineers about a performance topic. It will be like you are at lunch with a VMware performance engineer, having a conversation about what they have been working on recently.
While the video will be available later on VMware TV, we will have a live chat and stream of the video on Fridays. This will give you a chance to interact with the engineers from the video, asking questions and possibly poking fun at my performance on video.
The conversations are informal and supposed to be fun. I record them in a single take and the questions and answers are not rehearsed. We only decide the topic and general areas to cover before pressing record.
The first episode is with me talking to Josh Schnee (@jpschnee) about VMmark, which he has worked on for the past few years at VMware.
I've started to write this blog several times over the past month, but have been stopped each time because I didn't have the right approach. The challenge is that, like a lot of questions, the answer is "it depends."
I'm going to try to give as specific of an it depends answer as possible to the question "What is the best way to configure or use vNUMA to make sure I get the best performance?"
You already know that the answer is "it depends", and so I will provide some of the reasons as to why it depends.
NUMA architecture places memory with each CPU on a system to create NUMA nodes. Each of today's CPUs has multiple cores, which results in a NUMA node with a given number of cores and RAM. For example, Nehalem-EX systems have four sockets, each with 8 cores, for a total of 32 cores. If this system had 256GB of RAM total, it would mean that each socket had 64GB of RAM.
The way to get the best overall performance from the system would be to have multiple VMs that each fit neatly within the NUMA nodes on the host. The ESX scheduler will automatically place the VMs and their associated memory each on a NUMA node in this case. In the case where the workload fits within a VM that is equal to or smaller than a NUMA node, the only thing that has to be NUMA aware is ESX. This diagram shows four VMs, each with 8 vCPUs, on a Nehalem-EX system, each fitting nicely into its own NUMA node.
What if your VM needs to be bigger than a NUMA node? One of the great new features in vSphere 5 is vNUMA or the ability for NUMA to be presented inside the VM to the guest OS. With the new "Monster VMs" of up to 32 vCPUs this capability really comes in handy. It allows for the guest to be aware of the NUMA on the host and make intelligent decisions about memory usage and process scheduling.
Without vNUMA, the OS and apps are not aware of the NUMA architecture and will just treat the vCPUs and vRAM as one big pool and assign memory and processes. This will result in something like this:
But with vNUMA the VM can present the underlying NUMA architecture inside the VM. This allows the OS and application to make better decisions. If the application is NUMA aware and does a good job of optimizing things, it can end up looking something like this:
The reason that I can say it will only look something like this is that it really greatly depends on the application's NUMA support and implementation. vNUMA is only an enablement of NUMA inside the guest. Some applications can be written in a way to easily take advantage of NUMA and keep everything nice and neat on separate nodes. Some applications simply don't allow for this due to the way they need to function, so there could still be lots of non-local memory accesses for processes. This is true for both physical and virtual environments.
To summarize - If your VM will fit in a single NUMA node then you don't need vNUMA. If it needs to be bigger than a NUMA node, then you should most likely configure vNUMA to match the underlying hardware NUMA as makes sense. You will see a performance improvement with vNUMA if your application is NUMA aware (this is the it depends part).
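If you want a quick way to sanity-check that sizing, here is a small sketch that uses the same assumptions as the example above (one NUMA node per socket and RAM divided evenly across sockets):

```python
# A quick sizing check under the assumptions used in this post: a host with a
# given number of sockets, cores per socket, and evenly divided RAM.
def numa_fit(sockets, cores_per_socket, host_ram_gb, vm_vcpus, vm_ram_gb):
    node_cores = cores_per_socket
    node_ram   = host_ram_gb / sockets
    fits = vm_vcpus <= node_cores and vm_ram_gb <= node_ram
    return node_cores, node_ram, fits

# The Nehalem-EX example from above: 4 sockets x 8 cores, 256GB RAM.
node_cores, node_ram, fits = numa_fit(4, 8, 256, vm_vcpus=8, vm_ram_gb=48)
print(f"NUMA node = {node_cores} cores / {node_ram:.0f}GB, VM fits: {fits}")

# A 16 vCPU / 96GB "monster VM" spans nodes, so vNUMA becomes interesting.
print(numa_fit(4, 8, 256, vm_vcpus=16, vm_ram_gb=96))
```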
I've always liked movies and books that have a what-if scenario built into the story. The most famous is probably the Planet of the Apes movies. Charlton Heston plays an out-of-place and out-of-time human living in an upside-down world where apes have evolved to control and own the world. Humans are the ones living in the jungle and being used as lab test subjects. It asks the question - What if humans and apes switched places?
Here's a key scene from the movie where the apes find out that Charlton Heston's character can talk for the first time.
The apes are all of a sudden faced with an entirely new set of facts that could potentially change their entire world. The result is stunned silence (at least initially).
As a fun exercise Kong and I considered the question "What if virtualization had never been invented?" in So Say SMEs episode 7. We discussed some aspects of how technology would be different. It's not as drastic as if apes were ruling the Earth, but still pretty interesting.
In order for virtualization to be added into an environment for the first time, somebody has to be Charlton Heston and bring the new way of thinking to the forefront. I know that a few years ago many virtualization advocates inside companies would get reactions similar to what he got in the movie when suggesting to virtualize Exchange, SQL Server, or Oracle RAC. The difference was that it was the "server huggers" who would say "Get Your Stinking Paws Off My Servers!". The other difference is that the virtualization guy doesn't end up on the beach looking at the ruined Statue of Liberty. Instead, he ends up with a better way to run and manage the data center.
Last year was really a big year for the open source DVD Store database test kit that I co-created and co-manage with Dave Jaffe. We were talking about the DVD Store a couple of days ago and he pointed out that the DVD Store was a pretty busy place last year.
In December of 2010 (which was practically 2011) we released version 2.1 of the DVD Store which included lots of new features. Some key features were the ability to create any size DVD Store database, a driver program that can put load against multiple DVD Store databases at the same time, and some wizards that make it easier to install and get running.
Dave also put together a couple of videos that show the complete process of getting the DVD Store up and running on SQL Server and MySQL.
The DVD Store is a fun project to work on and it is great to continue to see it used across the industry by many different people in a variety of tests. Let me know if you have any questions or requests for new features.