Wednesday, December 28, 2011

An Intro to So Say SMEs

At the end of 2011 I started participating in a video series called So Say SMEs with Kong Yang, who I've done lots of stuff with in the past.  The idea is to do an unscripted five to ten minute video that focuses on a single topic.  We are keeping it casual and fun but still trying to cover some key ideas for each topic.

We started off with a quick discussion of cloud and virtualization in episode 1.  Please note that Kong is very fond of the word "absolutely."


In episodes 2 and 3 we covered high availability with virtualization.  Episode 2 was about the options available with VMware to do different aspects of high availability and Episode 3 covered some application / OS level stuff.  I finally remembered to wear a VMware shirt in episode 3.



 In our final episode for 2011 we each picked three predictions for 2012.  Kong accidentally introduces it as episode 5, even though it is actually only episode 4.  We are like George Lucas and release episodes out of order.


Looking forward to having lots of fun with this series next year.

Todd

Friday, December 16, 2011

DVD Store Adds Support for vPostgres and PostgreSQL

I am excited to announce that we have added support for the vPostgres and PostgreSQL databases to the DVD Store.  The DVD Store is an open source database test kit that was developed and is maintained by Dave Jaffe and myself.  It includes everything needed to create the DVD Store database and drive a workload against it.  This includes database build and load scripts, programs to create the load data, sample load data, client driver programs, and web tier applications.  See Dave's video for a detailed look at how to get everything installed and working.

Since the initial release of the DVD Store, in 2005, we have supported SQL Server, Oracle, and MySQL databases.  Last year we did a major enhancement that added the easy creation of any size test database, the ability to drive multiple VMs from a single driver program, and an easy-to-use script that steps through the whole process.

This release expands that support to a new database type.  By adding support for PostgreSQL we also get support for vPostgres, which is part of VMware vFabric Data Director.  This is VMware's new Database as a Service (DaaS) offering, and you can now use the DVD Store test application to try it out and put it under stress.

I would like to thank Jignesh Shah for his extensive contributions in getting this new PostgreSQL version completed and released.

Thursday, December 15, 2011

Front Row Seats Are The Best

I went to see the UT men's basketball team play with my 10-year-old son last week.  We had seats on the first row of the upper deck.  It was a mid-week game, so the attendance was pretty light.  I told him that after about 10 minutes, we could go down to the lower level and get seats that were probably only 15 or 20 rows back from the floor.  He told me that he didn't want to change seats.  The seats we had were awesome because nobody was in front of him.  He could see great.

This reminds me to keep my perspective in context.  Not everybody has the same perspective that I do.

I have run many performance tests with a wide variety of applications in a wide variety of configurations on vSphere (ESX or ESXi).  I know that performance is great.  We spend most of our time figuring out how to improve those last few percentage points, analyzing the outliers, and testing new features and versions.  I don't consider performance to be an issue, but something that can be fine-tuned and improved.

When I talk with customers that are relatively new to virtualization, I am reminded that not everybody has this same perspective.  We are still working to prove to them that it does perform very well. (Which is actually a pretty cool job!)  Fortunately, our performance has improved with every version and we have lots of success stories as well.  This makes it easier to show people and convince them why virtualization is a great platform. They can get all the advantages of virtualization and maintain great performance.

It turned out that those upper deck seats on the first row were pretty great.  We could see everything really well and we had a great time watching the game together.

Todd

Tuesday, December 6, 2011

A Model for Oracle on vSphere

I've been working with some of the guys at EMC's IT group on testing Oracle RAC on vSphere 5 over the past year.  They recently published a comprehensive white paper on their deployment model for Oracle on vSphere, which was one of the results of the work that we did together.  The performance testing that we did was part of the evaluation that EMC used for Oracle on vSphere as they continue to move towards having more and more of their data centers virtualized.

The deployment model provides some great thoughts on how to evaluate which combination of components should be used for each deployment.  It takes into consideration the scalability, high availability, and relevant clustering options from both VMware and Oracle.  Good stuff.

In addition to the deployment model for Oracle on vSphere, the paper also includes some key parameters for tuning and running Oracle RAC on vSphere 5.  These were found as part of the testing we did using large 32 vCPU based VMs running a four node Oracle RAC cluster.  More good stuff.

A couple of quick things that I conclude from the white paper:

1. The deployment model clearly illustrates how Oracle RAC is becoming more and more of a corner case that is only needed by a small percentage of databases.

2. EMC clearly believes that Oracle software is supported on VMware vSphere, and is already running a large percentage of their environment on it today.

Todd

Thursday, December 1, 2011

Official With SAP SD Benchmarks

It is rare to see official traditional benchmarks published on virtualization, but in the case of SAP it is happening.

A couple of weeks ago, HP published an SAP Three-Tier SD benchmark on vSphere 5 and an HP VirtualSystem.  This was a big system made up of 11 physical servers, hosting one "Monster" database VM and 20 application server VMs.  The result was a 2x increase in the number of users vs the previous SAP three-tier benchmark that was published on vSphere 4.1 (which was also an awesome result).

If you go back a couple more months, Fujitsu published an SAP two-tier SD benchmark on vSphere 5 that used a single "Monster" VM.  The rules that define a two-tier benchmark limit it to a single VM, which in vSphere 5 can now have up to 32 vCPUs.  The really interesting wrinkle in this publication was that Fujitsu also published a native two-tier SD result on the same hardware and software stack.  These two official results showed virtual performance within 6% of native.

Using the official SAP SD benchmark results, you can evaluate the capability and performance of vSphere as a platform for SAP.  The large three-tier environment clearly shows how much vSphere has grown in capability from the previous version.  The two-tier results show a clear comparison between virtual and native, with the difference being relatively small.

These official benchmarks are strong proof points for running SAP (and other important stuff) on vSphere 5.

Friday, November 11, 2011

How Rosie O'Donnell and Microsoft Conspired Against Me

It seems that in order to get to the bottom of a problem, and really understand what is going on, you have to be willing to continue to dig.  When I got to the bottom of my problem with Rosie O'Donnell, it turned out that Microsoft had assisted in a silent search and replace conspiracy.

I'm one of the creators and maintainers of the open source DVD Store database test kit and application.  Dave Jaffe and I created it for internal tests and then decided to enhance it a bit more and release it as open source in about 2005.  If you like to do database testing, you should really try it out.

As part of the data for this test database we created a program that generates fake titles and actors for the DVDs in our test database.  We take real names and then mix them up and recombine them to create all of our fake names.  We have a bunch of first names - TOM, ROSIE, BRAD - and we have a bunch of last names - CRUISE, O'DONNELL, PITT.  These are then randomly combined to create lots of names.  So we would have ROSIE CRUISE, BRAD O'DONNELL, and TOM PITT for example.
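
Just to illustrate the idea, here is a tiny Python sketch of that name mixing (the real kit uses C programs and much longer name lists, so this is only an approximation and not the actual generator):

import random

# Short sample lists - the real data files contain many more names
first_names = ["TOM", "ROSIE", "BRAD"]
last_names = ["CRUISE", "O'DONNELL", "PITT"]

def fake_actor():
    # Randomly recombine a real first name with a real last name
    return random.choice(first_names) + " " + random.choice(last_names)

for _ in range(5):
    print(fake_actor())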

This same set of data has been in use with the DVD Store for at least 7 years, but this week I found out something new - not all apostrophes are created equal.

When editing a data load file with vi on Linux that had our fictional actors' names in it, I noticed that instead of an apostrophe we had the following: O<92>DONNELL.  This was consistent throughout the file every time O'DONNELL was supposed to appear.

I looked at the same file on Windows in Notepad and it appeared correctly - O'DONNELL.  The strange thing was that if you used the keyboard to type a single quote (the key just to the right of the semicolon key), the character it produced looked slightly different than the one next to the O in O'DONNELL in the file.

Why was Rosie O'Donnell causing these problems with an apostrophe that wasn't an apostrophe?

It turns out that it is Microsoft's fault.  It is a known issue for those that are into these types of things.  Microsoft will replace an entered single quote character with a "Microsoft Apostrophe" that is coded as 0x92.  Microsoft will then display this as something that looks more like a real apostrophe, but at the cost of putting a character code into your text that isn't defined in ASCII.  Here is a quote from Wikipedia on the subject:

"Microsoft Windows CP1252 (sometimes incorrectly called ANSI or ISO-Latin) contains the typographic apostrophe at 0x92. Due to "smart quotes" in Microsoft software converting the ASCII apostrophe to this value, other software makers have been forced to adopt this as a de facto convention. For instance the HTML 5 standard specifies that this value is interpreted as CP1252. Some earlier non-Microsoft browsers would display a '?' for this and make web pages composed with Microsoft software somewhat hard to read."

It turns out that we initially entered the names for this data in Microsoft Word, where the silent search and replace of our apostrophes conspired with Rosie O'Donnell against me.
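
If you ever need to clean up a file like this yourself, here is a minimal Python sketch that converts the CP1252 typographic apostrophe back to a plain ASCII one (the file names are just placeholders, and this assumes the file really is CP1252 encoded):

# Decode the file as CP1252 so that byte 0x92 becomes U+2019,
# then replace that typographic apostrophe with a plain ASCII one
with open("actor_names.txt", "rb") as f:
    text = f.read().decode("cp1252")

with open("actor_names_clean.txt", "w", encoding="ascii", errors="replace") as f:
    f.write(text.replace("\u2019", "'"))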

Be careful out there, intrepid geeks - not all apostrophes are created equal.

Todd

Thursday, October 27, 2011

Beginners Guide to Copenhagen Train System

I attended VMworld Europe last week, in Copenhagen, Denmark and had a great time.  Part of this was using the excellent system of trains to get around.  In the process I learned a few key insights that I wanted to share for others that might be using the trains in Copenhagen for the first time.  If others reading this are more knowledgeable please add comments (or if you find errors let me know about that too).

There are actually three different types of trains that operate in Copenhagen.  If you get a city pass, you can ride any of them while remaining in the Copenhagen area.  The three train types are the Metro, S-Train, and Regional Trains.  The Metro is run by a different company than the S-Train and regional trains, and hence they have separate websites.

The Metro currently has two lines - M1 and M2 - which both share Vanlose as the end of the line to the west, but diverge at the Christianshaven stop to end at Vestamager for M1 and Lufthavnen for M2.

The S-Train (or S-Tog in Danish) is the other commuter train system in Copenhagen.  It has what appears to be at least seven different lines, that are color coded.  There is a big "S" sign that marks which platforms are for the S-Train.

The regional trains go to areas beyond Copenhagen and are what you take to go over into Sweden or up to Helsingor, for example.  This is also what I rode from the airport to the Kobenhaven H station, and later from Kobenhaven H to the Malmo C station in Sweden.

This map is the best one I found. It shows all three trains and stations.  It's from a travel blog site.  I think that it is the best because the Metro is run by a different company than the S-Tog and regional trains.  So it took an independent 3rd party to put everything together.

You can transfer between the different train types, you just need to make sure you have the proper ticket.  They do check people for tickets and they do hand out fines to people that do not have a ticket - I saw this happen twice in the week I was in Copenhagen.

There were always very helpful people in the ticket offices and on the platforms, at the big stations, that were happy to help me understand which platform and which train to take.  It was also very nice that they all spoke English.  If you aren't sure what you are doing - just ask somebody.

Todd

Wednesday, October 19, 2011

Taking a Different Look at Performance for vSphere Storage Appliance

I've been working with and doing performance testing with the new vSphere Storage Appliance (VSA) for most of this year.  The white paper was published last week and I encourage you to read it if you are interested in getting a deeper understanding of VSA.

I've also done a lot of performance work with Oracle RAC, SAP, and SQL Server this year, and there is a big difference between that and VSA testing.

When testing with big databases and the applications that go along with them, the storage requirements to achieve the needed performance levels are usually pretty high.  I've used big EMC, NetApp, HP LeftHand, and Dell EqualLogic storage to support thousands of IOPS while maintaining low latency.  In these tests we analyze the storage usage to ensure that we get the performance that we need.  These big storage systems are necessary to meet the requirements of the applications being tested.

VSA is designed to be a simple to install, simple to use, and simple to manage storage solution for small environments.  It can be the first "SAN" for environments where it was not possible to have a SAN before.  It is not designed to support or be used with the types of workloads at the stress levels that I normally test with.  So approaching a performance study of VSA required something different.

Instead of trying to push as many IOPS as possible or getting as many users as possible, I focused on providing some insights into the key factors of VSA performance.  This includes a couple of test scenarios designed to show two different things.

The first used the VMmark2 workload to show that enterprise apps can run with good performance on VSA.  Specifically, it showed that Exchange, a web store front end and back end, and a Java-based interactive website could all run very well, at the same time, while supporting the workload equivalent of over 1000 users.

The second test used IOBlazer (a cool and easy to use IO generator tool) to show how the VSA's Network RAID (or replication) traffic impacts performance as load is spread across the VSA cluster.  This test illustrates how, as you put load on one datastore, the network RAID that keeps the data always available causes some load to also occur on a secondary node in the VSA cluster.  The graphs are cool in this section.

I think that most customers will probably not need the additional performance information in these scenarios because of how easy VSA is to use and manage, but it will help those who are interested in a more advanced understanding or want to get into more detailed performance monitoring.  And many geeks like me will be interested anyway.

So this approach is a bit different from the usual type of performance work that you see.  This isn't the million IOPS test or an Oracle RAC performance study.  It's meant for a different environment - one that is getting its first taste of shared storage and cool things like vMotion, DRS, and HA.

Todd

Tuesday, October 11, 2011

VMware Was Everywhere and Nowhere At Oracle Open World 2011

There was a running joke with everybody that came by the VMware booth at Oracle Open World this year.

"Wow, Oracle really doesn't like you guys!""What did you do to get back here?"  "Are they trying to hide you?" "You guys have been literally put in the corner!"

The VMware booth was literally in the back corner of the expo floor at OOW 2011.  Some of this was our fault for deciding to go with a smaller booth this year, but I think some of it may have to do with something else.

There were still lots of customers that came by and we had tons of discussion around running Oracle products on vSphere.  This year it seemed like there were more customers who were already running some or all of their Oracle databases on vSphere which was great.  I also spent some time explaining the concepts of virtualization and specifics about vSphere to customers who were new to the ideas which was also great.

From what I heard, talking to various speakers at OOW that I know, Oracle requested that all references to VMware be removed from their presentations and replaced with generic references.  But the speakers would go ahead and say "VMware" during the talk.

The final VMware highlight for me came during Larry Ellison's keynote on Wednesday afternoon.  He announced the new Oracle Cloud and gave a lengthy demo.  While announcing the Oracle Cloud he spent a fair amount of time describing why it was good.  You could take most of the statements that he made about the Oracle Cloud and put them directly into a keynote from VMware's Paul Maritz.  The Oracle vision for cloud and the VMware vision for cloud were similar in many ways.  Larry even used Paul's line about the Hotel California for proprietary clouds - VMs can check in, but they can never leave.  Larry, of course, never directly referenced VMware in his keynote, but the similarity in hybrid cloud strategy was unmistakable.

So while Oracle preferred that VMware not be mentioned or seen at OOW, in reality VMware was everywhere.

Todd

Friday, September 30, 2011

No More Talk About Old Stuff at Oracle Open World

Oracle Open World 2011 starts next week, and I'll be there again this year (I think this will be the 7th one for me).  I've done lots of testing with Oracle  RAC on vSphere 5 over the last year and look forward to getting the chance to discuss it with customers, partners, and general database geeks.

One thing that has changed since OOW last year is that Oracle changed their support stance, and RAC has been supported on vSphere since last November.  We still get some questions about support for Oracle on vSphere, but not nearly as many as last year.  If you want to discuss it or get your questions answered, you can come by and talk to us at the VMware booth or you can check the website.

An even bigger thing that is different is that VMware has begun to enter the database world.  With the announcement of vFabric Data Director, we now have the capability to add database as a service to your vSphere environment.  This changes and challenges many of the traditional notions about how to provision, manage, and use databases.  I think this is an incredibly interesting new development that is an example of how things are going to be changing in the future.

I think it's going to be a fun year at OOW with lots of the discussion revolving around virtualization and cloud, and hopefully less talk about support and licensing.

Todd

Friday, September 23, 2011

Five Things You Might Not Know About ESXTOP

I use ESXTOP almost daily in my job to do analysis of performance on ESX / vSphere.  In working with partners and customers I often hear them say that they didn't know ESXTOP could do that.  Here is a list of the five most popular:

1.  What is this "ESXTOP" that you speak of?  Many have just not heard of ESXTOP and either have not been concerned with performance previously or have only used the information from the viClient's performance graphs.  To directly answer the question, ESXTOP is a character-based utility that runs on the ESX or ESXi host and can be used to monitor, display, and log virtually all of the performance information of that host.  You can simply enable the ability to ssh into the host from the troubleshooting options on the ESX host local console, and then ssh into the system and run "esxtop" to get started.  You can also use "resxtop" to run it remotely against an ESX host.  Either way, ESXTOP is a tool often used at VMware to diagnose and identify performance problems.

2. There's more to see than just CPU data.  The initial screen for esxtop is very similar to top from Linux, showing CPU usage information.  This is only a small part of what can be seen with esxtop.  By pressing different keys, different sets of data are displayed.  The "d" key will open the disk screen, "m" will show memory related information, "n" will show the virtual networking stats, and "v" will show the virtual disks.  To get a complete list of available screens just press "?" or "h".  You can also add more fields or columns of data by pressing the "f" key and then selecting the additional fields that you want to be displayed.

3. You can capture ESXTOP data in batch mode.  It seems that most people run esxtop in its interactive mode, where it displays all of its great performance info in real time.  This is a fun way to run ESXTOP and can give valuable insights as things happen.  But it can be even more useful to capture all of this data so you can analyze it at a more detailed level later AND use it to create cool graphs.  By starting esxtop with the -b option and redirecting the output to a file, esxtop will create a CSV file with all of the performance data (see the example command at the end of this list).

4. View ESXTOP data with Windows perfmon and impress your friends and co-workers.  The CSV file that is output by ESXTOP can be imported into Windows perfmon.exe (Windows Performance Monitor).  This provides a good graphical interface to be able to quickly select counters and view them.  In order to import the file into perfmon, you will first need to copy it onto the Windows system where you will be running perfmon.  You can use WinSCP or another utility of your choice to get it copied off the ESX host.  Start perfmon and then right click in the middle of the graph area and select Properties.  Then select the Source tab and click on the Add button.  Browse to the saved .CSV file and open it.  You will now be able to add counters in perfmon that are from the data for your ESX host.

5. Carve out a small set of data from an ESXTOP data file with esxplot to make your job easier.  The CSV files that esxtop produces can be massively wide, with thousands of columns - too wide for even the latest versions of Excel in many cases.  This can make it difficult to quickly pull out a specific performance counter to put into your spreadsheet for analysis or cool graph creation.  The easiest solution I know is to use the VMware Labs fling esxplot to export a subset of the data.  You simply import the .CSV file from ESXTOP into esxplot, select and view data for counters in esxplot, and then, when you are ready, export the data.  The result is a much smaller .CSV file that has just the columns of data that you want.
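
Going back to the batch mode tip in number 3, a capture might look something like the command below.  The -d option sets the number of seconds between samples and -n sets how many samples to collect, so the values shown here (which are just an example) would capture about an hour of data:

esxtop -b -d 5 -n 720 > esxtopdata.csv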

So now you are armed with lots of ESXTOP tips and tricks that will make it easier for you to analyze and view performance of ESX and its VMs.

Todd


Tuesday, September 20, 2011

Understanding %Ready with Monster VMs

I was doing some benchmark testing with a "Monster VM" of 24 vCPUs and 85 GB of RAM and was concerned that the %Ready CPU time for the VM was very high at 30% to 40%.  The strange thing was that performance of the VM seemed to be pretty good, with no real issues (I was of course trying to find another few percent of performance for the benchmark - but that is another story).

Our guidance with %Ready has been that it should be below 5%, but this needs to be reconsidered when using large VMs.  The reason is that the %Ready you see in esxtop is the sum of %Ready for all of the individual vCPUs for that VM.  In versions prior to vSphere 5, this meant that only up to 8 vCPUs could be involved in reaching that total.  With vSphere 5 and "Monster VMs" this is now up to 32 vCPUs, and even small %Ready times for each individual vCPU can easily push the overall %Ready over 5%.

In general esxtop uses this addition method when showing the usage level of CPUs, and it makes sense that it should be carried over to %Ready as well.  For example, when esxtop shows the CPU usage of an eight vCPU VM the maximum is 800, and for a 32 vCPU VM it is 3200.
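
To put some numbers on that: a 24 vCPU VM reporting 30% Ready in total is averaging only about 30 / 24 = 1.25% Ready per vCPU, which is still comfortably under the old 5% guidance when considered on a per vCPU basis.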

When looking at %Ready it is important to also understand how many vCPUs are in that VM and take that into consideration.  If you expand out the stats for the VM (press "e" and then enter the GID for the VM when in esxtop) you will be able to see the %Ready for all of the individual vCPUs which should help to put things into context very quickly.

Returning to my example, it turns out that each of my 24 vCPUs was at around 1% for %Ready, so a total %Ready of 30% for the entire VM did not represent a performance problem.

Todd

Thursday, September 15, 2011

VMworld 2011 Trip Report

I'm an engineer at VMware and sometimes get to know about things that we are working on before they come out, so it is great to see how it all comes together.  That happened for me this year at VMworld.

I think that Steve Herrod's keynote really did the best job of showing where this is all going.  The scenario of the new insurance claims adjuster worked really well.  If you are only going to watch one session from VMworld, this would be the one I would recommend.

I have a specific interest in the new vSphere Storage Appliance (VSA) and it was great to see it covered in both keynotes.  The VSA session that I presented with the VSA product manager went really well and we had some great discussion and Q and A after the session.  The official tally was 451 people and I didn't really see many leave during the session, and nobody threw anything at us!  VSA is a great feature to bring shared storage to small environments that previously couldn't afford it or lacked the technical ability to install and configure it.  A whitepaper is in the works and I will post more on VSA in the future.

The million IOPS on a single vSphere 5 host got lots of coverage.  It's really cool to see the hero numbers that Chethan produces, and he did a great job working with EMC on this number.  They were able to get to 1 million IOPS very quickly because they really didn't have to do any tweaking or tuning.  They just had to get the server and the storage needed to support 1 million IOPS in place, then get the VMs set up, and run through the series of tests.  No benchmark special settings required.

I was invited to speak at the Dell TechCenter Users Group meeting that happened on Tues night over at the Wynn.  I gave a 15 minute strictly technical overview of VSA and had a great time getting to see many of my twitter friends there.  I heard that there was some video of the event.

I attended sessions throughout the week, and all of them were pretty good.  There were a couple that really stood out.  Chad Sakac gave a great session on all of the new things that EMC was doing with vSphere 5 to enable new features and better performance.  He also presented part of EMC's super session and covered lots of ground, including the new vFabric Data Director Database as a Service.  Chad is a great presenter and I would recommend that you catch his sessions when possible.  In particular he has this great ability to use recorded video demos and make you almost think it is a live demo.  He really has great energy and passion and it comes across in his sessions.  He practically got a standing ovation at the end (or maybe people were just trying to get to lunch, hard to tell with geeks).

The most important part of the show is getting to meet and talk with so many people from around the industry.  There were lots of people that I only see once a year - at VMworld.  It was great to see and talk to everybody.  I can't wait until next year.

Todd

Friday, September 9, 2011

A Small Part in 1 Million IOPS

VMware demonstrated the impressive storage I/O capabilities of vSphere 5 recently when a single host achieved 1 MILLION IOPS.  We can do a whole bunch of IOPS - which is really cool.  This is way more IOPS than just about any workload needs and demonstrates that storage throughput is not a problem for vSphere.

I had a small part to play in this effort.  I got an email in June asking if I had a large server that I could loan out for a few weeks.  It turned out that I did have a four socket server with 512GB of RAM that we had already purchased and that had shipped, but had not yet arrived.  So as soon as it arrived at our lab, I had it immediately shipped right back out to an EMC lab the same day.  They had a deadline of just a few weeks to get everything done.

Three weeks later I got an email that it was ready to ship back, I provided our address and received the 1 million IOPS server back in my lab another week later.  I'm now using this famous 1 million IOPS server for some other tests and it will reappear in future blogs and white papers.  It kinda feels like working with a rockstar.

Todd

Thursday, September 8, 2011

Up to Speed with vFabric Data Director

When I read the press release announcing VMware vFabric Data Director, I wasn't sure this was what we had called project Aurora internally until I read paragraph eight.  It was at this point in the press release that we get into enough technical specifics that I could be sure.

This made me fear that the coolness of this new database offering from VMware would get lost.  That most certainly was not the case.  Chad Sakac wrote a great post giving some of the background around project Aurora and lots of detail on exactly why this is such a powerful new capability.  Brent Ozar's post was from a completely different angle than you would expect (or maybe you would if you've seen him in his kilt at PASS) and first describes it as a new feature of Microsoft SQL Server Denali.  Scott Drummonds wrote another great post that I think puts the new VMware database offering into perspective for the database market.

The common thread among all of this is that the new vFabric Data Director is a new approach to using databases.  It opens up new ways to manage and use databases in a more cloud or service oriented way.  If you missed it from the initial announcement or it didn't make it onto your radar screen due to all of the other stuff going on during VMworld, a good way to get up to speed is to read these blog posts.

Todd

Friday, August 26, 2011

Getting Started with VSA

If you are going to be at VMworld this year, please come to the session I'm presenting with Edward Hsu on VSA.  We are going to cover some basics and some in-depth goodies.  If you are thinking about using VSA, this session will help you to understand exactly what VSA is and give you some details to help you understand how to deploy and manage it.  It is on Monday at 12:30 in Palazzo Ballroom G - VSP3299.

Here's a very fast overview of VSA:  It stands for vSphere Storage Appliance.  It takes the local disks on two or three vSphere 5 hosts and creates a "virtual SAN".  This virtual SAN enables all the advanced features of vSphere - vMotion, DRS, HA - to be possible with only local storage.

I've done lots of testing with it over the past four months, and now that the product is finished, I can start talking about it.  This session at VMworld is the starting point and it will be followed by a whitepaper and several blog articles.

Todd

Thursday, August 25, 2011

HP TouchPad Conspiracy Theory #1

I've always wanted to be the author of a conspiracy theory and I think that HP has given me the perfect opportunity.  The whole TouchPad / WebOS apparent dismantling is just a trick.  HP has actually just pulled off one of the greatest stunts of all times.  What follows is my theory of what is really going on.

HP acquired Palm and WebOS last year and began to work on a strategy to create devices that were competitive with Apple's iPhone and iPad.  The WebOS is a technically superior OS to iOS in many ways, which is an advantage, but would not guarantee success simply due to technical superiority.  HP also had the ability to basically copy the same general tablet form factor and technical specs that all the other tablets were using.  In basic terms HP was able to create a device that was similar technically to the iPad and in some ways even better.

HP realized that they also had to compete with the Apple appstore and development community.  This is not something that can be easily solved, so a few highly placed executives must have come up with the secret plan that we now see unfolding.

They planned to release the Touchpad as a high end tablet that "Worked Like Nothing Else" with celebrity spokespeople and lots of ads.  This would give consumers the impression of the high value of their new tablet.    In order to spur demand and quickly sell ALL the Touchpads they could make, they would then slash prices to $99 and announce that they were discontinuing the Touchpad.  All Touchpads would sell out in days.

This part of the secret plan has been executed already, and we have seen that it was successful.  The next steps should come in the next few days.

It is my theory that HP's secret plan next calls for HP to announce that, in the wake of unprecedented demand and the need to support all of its new customers, they are going to keep working on tablets and WebOS.

Maybe this is just a crazy conspiracy theory, but I think that it makes sense.  We'll all just have to wait and see what happens over the next couple of weeks :)


Tuesday, August 23, 2011

What Are You Waiting For?

I'm feeling bold today, so here it is. There are no remaining technical barriers to virtualizing your entire data-center.  The benefits are real.  What are you waiting for?

Many many customers have virtualized all the most business critical and performance intensive apps found in datacenters.  Exchange is now commonly virtualized.  SQL Server and Oracle databases are also run as VMs on vSphere in data centers all over the world.  Even bigger more complex stuff like SAP and Oracle RAC can be run virtualized with no technical issues.  There are proven examples for just about every application type out there.  It works.

The benefits are big and from what I have seen, the more that you virtualize the bigger the benefits become.  I think that everybody is aware of the clear consolidation benefit for legacy and underused servers.  Once you have completed a consolidation effort you begin to see that the operational benefits are much bigger than what you realized before starting.  In fact I would say that these benefits are much bigger than the pure consolidation gains.

The number of workloads that cannot be virtualized due to technical barriers is very small.  This leaves issues like application licensing uncertainty, organizational politics, outdated conceptions of what virtualization can do, support questions, fear of change, and others.  Some of these have simple answers and others are more difficult, but even though they are not technical reasons they are still very real reasons.

If you were to purchase a new Intel or AMD based server today with either two or four sockets, you would be hard pressed to find an application that truly needs the whole host.  The only practical way to really utilize these nice new servers is to use virtualization to be able to run multiple workloads on it at the same time.  Otherwise you are looking at a system that is probably less than 30% utilized almost all of the time.

Todd

Friday, July 1, 2011

Calculating Back of Envelope Estimates for IOPS

I recently spent some time putting together a simple spreadsheet to help me estimate how many IOPS to expect from a given config.  I thought that I would share with the group.  Some of the key factors in the equation are pretty cut and dried, but others have to be estimated.  So this equation will only give you an answer that is as good as your estimates.  This formula and blog post are really designed to estimate random IOPS and don't address sequential I/O.  (The previous two sentences should be read as my disclaimer - please notice the use of the word estimate in the title and throughout the blog post.)

I'm first going to explain the equation, and then I'll give it to you in short form with a few examples.

Reads / Writes and RAID types - A single read results in a single read operation, but a write results in multiple operations depending on the RAID type used.  The two mirroring RAID types, 1 and 10, have a write cost of 2.  RAID 5 has a write cost of 4.  For this reason the equation is broken up into two halves for calculating the number of reads and the number of writes.  The weight given to reads and writes is based on an estimation of the read/write ratio of the workload (this is one of the estimates that you must come up with, and it makes a big difference in the final number).

Disk Performance and Number of Disks - Each individual disk is capable of a given number of random I/O operations per second.  The better or faster a disk is, the more IOPS it can complete.  There are lots of sources around the web with estimates for 7200 RPM, 10K RPM, 15K RPM, and SSD disks.  I've always used the round number of 120 IOPS as an estimate for a 10K RPM disk, for example.  The other factor here is the cache on the RAID controller or SAN processor or whatever it is called by the solution that you are using.  The size of the cache will affect the overall performance, with larger caches generally allowing higher performance.  The IOPS per disk is the other estimate that can cause a big difference in the final results.

So you multiply the number of disks in the RAID set by the estimate for IOPS per disk to get a total IOPS for the LUN / virtual disk / RAID group / whatever you want to call it.  You then multiply this by the percentage of reads for the read half of the equation.  For the write half you do the same, but you then apply the write penalty by dividing by the RAID cost for the RAID type you are using.  I think about now is when you realize why RAID 5 performs so poorly in write intensive environments.

Here is the formula in simple terms:

((Num Disks * IOPS per Disk) * pct Reads) + (((Num Disks * IOPS per Disk) * pct Writes) / RAID Cost)

Examples:

An example for 8 x 10K RPM disks in RAID 1/0, estimating 125 IOPS per disk, with 60 percent reads and 40 percent writes:

( ( 8 * 125) * .60) + ((( 8 * 125) * .40) / 2) = 800 IOPS

The same, but with the estimate of IOPS per disk changed to 150:

( ( 8 * 150) * .60) + ((( 8 * 150) * .40) / 2) = 960 IOPS

And now change to RAID 5 with cost of 4:

( ( 8 * 150) * .60) + ((( 8 * 150) * .40) / 4) = 840 IOPS

And finally RAID 5 with 85% writes:

( ( 8 * 150) * .15) + ((( 8 * 150) * .85) / 4) = 435 IOPS
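
If you would rather let a script do the arithmetic, here is a small Python version of the same estimate.  The same disclaimer applies - the output is only as good as the IOPS per disk and read/write mix estimates you feed it:

def estimate_iops(num_disks, iops_per_disk, pct_reads, raid_write_cost):
    # Reads cost one I/O each; writes are divided by the RAID write penalty
    raw = num_disks * iops_per_disk
    pct_writes = 1.0 - pct_reads
    return (raw * pct_reads) + ((raw * pct_writes) / raid_write_cost)

# 8 x 10K RPM disks in RAID 1/0 (write cost 2), 60/40 read/write mix
print(estimate_iops(8, 125, 0.60, 2))   # 800.0
# Same disks at 150 IOPS each in RAID 5 (write cost 4)
print(estimate_iops(8, 150, 0.60, 4))   # 840.0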

Special thanks to Scott Lowe for his excellent blog on essentially the same topic last year.

Wednesday, June 22, 2011

An Announcement About An Announcement

Today there is an announcement that you can sign up to hear an announcement from VMware that will happen on July 12.  While this does seem silly, this is what everybody does these days, with kinda vague invitations to hear about the next great product or thing or concept.  There really are lots of announcements about announcements.  I guess that me blogging about it adds even one more level.

It really reminds me of when I first realized how important it is to have small meetings before a big meeting to make sure that nobody was surprised and the big meeting went as expected.

Even though I'm poking a little bit of fun at the state of affairs around these announcements, I must admit that I fall for it just about every time.  I like the anticipation of hearing about an announcement a few weeks or days ahead of time and then looking forward to what will be announced.  And that is true in this case as well - I'm looking forward to July 12th to see and hear the VMware announcement.


Todd

Monday, May 9, 2011

Be Cool and Know Geeky Facts - ESX 4.1 Maximums

It's always fun when you work on projects and configurations that push the limits of what is possible.  This isn't actually true, as sometimes when you are on the bleeding edge you do actually bleed.  Hopefully you don't bleed for very long and then you are back to having fun.

Anyway, I recently had to consult the vSphere 4.1 Maximums doc again to check the maximum number of virtual SCSI devices per VM and the maximum number of LUNs per ESX host.  These are big numbers and most configurations do not come close to these limits.  It is an interesting document to look through to see what the actual limits are for all the various VM and ESX categories.  Besides, you could be the cool one at the party who knows that you can have up to 10 virtual NICs per VM on ESX 4.1 (among other great things!)

While not all maximum values increase with each new release of vSphere, many do, so be sure to get a fresh copy of this doc after a new release of vSphere / ESX.

An interesting side note is that the published maximums are all tested values.  They are not theoretical numbers based on the way the code is written, but actual tested configurations.

Todd

Thursday, February 24, 2011

Big VMs Aren't What They Used To Be

I've had the privilege to work on some of the current generation Intel Westmere and Nehalem EX servers over the past few months.  This testing has focused on Oracle RAC (which has been pretty cool in lots of ways) and has involved what I would have considered to be very large VMs not so long ago.  My typical RAC node for testing is usually 4 or 8 vCPUs with 32 GB of RAM, and it turns out these are moderate or small in size.

When the server has 32 cores and 256 GB of RAM, a VM with only 8 vCPU and 32 GB is relatively small.  In fact I'm often running four of them at the same time and still find that performance is great.  Additionally, I'm using vMotion to move these VMs in a matter of minutes from one server to another.

I'm using these 8 vCPU VMs with 32GB of RAM the same way that I used 2 vCPU VMs with 8GB a few years ago.  They are no longer big VMs.  Can't wait to see what we are doing in a few more years.

Tuesday, February 22, 2011

Benchmarking Private and Public Clouds

Earlier today I ran across a test that compared the performance of a private cloud against several public clouds.  The tests were done by CloudHarmony, who appear to mostly focus on running tests against public clouds.  They have a variety of tests that seem to be pretty practical and reasonable, measuring CPU, disk, memory, encryption, and something referred to as programming language performance (I thought this was what I was given grades for in CS class...).

According to their blog, CloudHarmony agreed to work with KT, a large Korean telecom, and do some testing against the private cloud KT had recently finished building.  CloudHarmony then compared the results against the many public clouds that they have tested.  In their conclusion they found that the KT private cloud performed very well in comparison to the public clouds.

I find this interesting as it is a key aspect of what everybody is going to have to consider as they begin to adopt public clouds.  When does it make sense to use public vs private?  Performance is actually fairly easy to characterize if you can get meaningful tests.  More difficult will be understanding the costs and risks of doing either private or public clouds.  I think that for many organizations it will make sense to use both.  The organizations that get the mix of public and private right will have a competitive advantage, enabling their business to run more efficiently while maintaining agility.

Friday, February 18, 2011

Everybody is Talking About Oracle RAC on vSphere

The momentum and credibility of running Oracle RAC virtualized on VMware vSphere is building and more is being published.  The most recent example is a great whitepaper by EMC that focuses on using their FAST VP Automated Tiering with a virtualized Oracle RAC on vSphere 4.1.

Because the paper is an EMC paper it naturally emphasizes their storage technology.  It's interesting that they decided to use Oracle RAC on vSphere as the stack to highlight their FAST VP.  They could have picked just about anything, but I'm guessing that they used Oracle RAC on vSphere because that's what customers are asking about right now.  This gives two reasons to check it out.  And even if you are a NetApp, EqualLogic or 3Par guy, you might still want to recommend this paper just for the Oracle RAC aspect - or maybe not - but the EMC guys will definitely like it!

I'm not going to get into the details of the paper, but it does include a physical vs virtual comparison with a very small delta in performance.  Very interesting stuff.

Sunday, February 6, 2011

Favorite Five From Texas

After reading Scott's blog a few weeks ago about his favorite albums, I've been thinking about my own list.  I decided I couldn't do a list of albums and instead have just decided to list my five favorite Texas musicians.  This list is not in any order.

The Eagles - The Hotel California album is great.  Don Henley is from Texas and went to the University of North Texas, which is the same university that I attended.

Norah Jones - Her distinctive voice and smooth music make it easy to listen to her songs over and over.  We have all of her albums and have them playing in the house often.  Norah Jones also is from Texas and attended UNT.

Jerry Jeff Walker - I have seen Jerry Jeff play all over Texas and I can't wait to see him again this year.  He is most known for Mr. Bojangles, which has been covered by a ton of other artists, but he is really at his best on songs like Sangria Wine, London Home Sick Blues, LA Freeway, and Contrary to Ordinary.  Jerry isn't from Texas, but has lived in Austin for over 30 years and much of his music is about Texas.

Stevie Ray Vaughan - The best blues guitarist of all time is from Austin, where there is a statue of him near downtown on Lady Bird Lake.  His song Texas Flood is a great example of what his music is all about.

Willie Nelson - Willie is one of the greatest song writers and performers over the past 50 years.  I got to meet Willie at a radio station here in Austin about 8 years ago and saw him perform Angel Flying Too Close To The Ground in a very small room.  Willie is simply incredible and has more great songs than I can link to.

You might be able to argue that there are others that deserve to be on the list - there are several more that I wanted to include - but you would have to agree that all of these are pretty good.

Friday, February 4, 2011

Summer Internships at VMware

I attended the career fair at the University of Texas near downtown Austin yesterday.  I was there as an engineer in the VMware booth to talk to students about opportunities to work at VMware.  We were primarily looking for interns for this coming summer, where there are over 100 positions available, but also looking for new college graduates as well.  So if you are a student looking for an internship this summer (or you know a great student candidate), check out our website and submit your resume.

I haven't had the opportunity to attend a career fair in a long time, and I found it really interesting.  It was great to get to talk to so many bright and talented students and think that many of them could end up working at VMware this summer.  After talking with thirty students over the course of a few hours it becomes clear what makes one stand out in comparison to another.

Students that have done significant projects outside of their normal course work or have completed internships that allowed them to gain skills and experience have a big advantage.  Generally these students tend to be seniors or post graduate students, but in a couple of cases I talked to first and second year students that had done some really cool personal projects.

The other aspect that makes somebody stand out is confidence and the ability to speak with certainty about the details of the things that they have done or experienced.  This seems obvious to me after having been in the work force for some time, but looking back on when I was looking for my first internship and job, I probably didn't understand how important this is.  Go ahead and speak with confidence about what you do know (without going too far and claiming you know things that you don't - as this will get you in real trouble!).

Todd

Wednesday, February 2, 2011

Hitting Multiple Targets Like Bruce Lee with DVD Store 2.1

Bruce Lee was so fast that he could deliver punches and kicks to many bad guys at the same time.  As a matter of fact he would have been bored if he was only hitting one guy at a time.

DVD Store 2.1 now has a driver program that can deliver requests to multiple databases at the same time.  This allows a single driver program to push load against multiple DVD Store databases that are running on different hosts.  The most common reason this is needed is to be able to run load against a number of virtual machines all at the same time.  Prior to this new version, we ran load against multiple VMs by having multiple driver programs running.  This worked fine, but it required some scripting to synchronize starting, and it also required some work after the run was over to get the total orders and average response time across all the VMs.

The new "Bruce Lee" enabled driver program runs against multiple databases and reports the stats on a per target basis as well as totals.  This makes it much easier to do tests against multiple DVD Store databases and get the summarized results.

The multiple targets are specified using the same parameter that was used for single targets, but you can now specify more than one by listing them with semicolons.  An example looks like this:

ds2oracledriver.exe --db_size=20GB --n_threads=4 --target=host1;host2;host3;host4

One interesting point is that the number of threads to be run is specified on a per-target basis.  So in the above example, 4 threads will be run against each host, resulting in 16 threads total.

The new DVD Store 2.1 has many new enhancements, including things like custom size databases, that I'm writing about in this series of blogs.  More to come in the following weeks.

Wednesday, January 26, 2011

DVD Store 2.1 and Custom Size Databases

The biggest new feature in DVD Store 2.1 is easily the ability to create any size database.  This was a widely requested feature for years and I'm really glad that we were able to get it into this update.

First some background as to what this means.  When we initially created the DVD Store we provided separate scripts that allowed for the creation of a Small, Medium, or Large database which corresponded with 10MB, 1GB, and 100GB.  It was possible to take these scripts and modify them to create a different size database, but you really had to know the DVD Store pretty well to get all the needed changes made.  Despite this, we had several users who were able to create some different sizes and publish results with them.

In DVD Store 2.1 we have created a perl script (InstallDVDStore.pl) that asks a series of questions and then generates a set of database creation scripts and load files.  The first question it asks is how large you want the database to be.  It then uses that number to calculate how many customers and orders are needed to create a database of that size, based on the ratio of rows in the 100 GB large database from previous versions of the DVD Store.  These row counts are then used to create the load files using some C programs that are part of the DVD Store kit.
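
To illustrate the calculation, here is a rough Python sketch of that linear scaling.  The reference row counts below are placeholders for illustration only - the real numbers and ratios live in the perl script and the C data generation programs:

# Illustrative sketch of the linear scaling that InstallDVDStore.pl performs.
# The reference row counts are placeholders, not the kit's actual values.
REFERENCE_SIZE_GB = 100.0
REFERENCE_ROWS = {"customers": 200_000_000, "orders_per_month": 10_000_000}

def scaled_rows(target_size_gb):
    ratio = target_size_gb / REFERENCE_SIZE_GB
    return {table: int(count * ratio) for table, count in REFERENCE_ROWS.items()}

print(scaled_rows(20))   # row counts for a 20 GB database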

When the InstallDVDStore.pl script completes, a set of data files and database creation scripts are ready to be used to create the database.  This gives you the opportunity to make changes or tweaks to the scripts, if you want, before the creation.  It also allows you to use the same set of scripts to create the database again without having to rerun the perl script.  Run the createall script and the database will be created and loaded with data, resulting in a database of the size that you specified.

Todd

Thursday, January 20, 2011

New Version of DVD Store Released

After months of work, DVD Store 2.1 has been officially released.  This new release has some really great new features designed to make it easier to use and more flexible.  DVD Store is an open source benchmark simulating an online store that Dave Jaffe and I created when I was still working at Dell.   We are continuing to support and maintain it and this new release is the proof.  In fact, DVD Store 2.1 is one  of the workloads in the new VMmark 2.0 benchmark that was just released in December.  The DVD Store supports SQL Server, Oracle, and MySQL databases and includes data creation programs, data loading scripts, database creation scripts, and load generation programs.

The key new features in DVD Store 2.1 are the ability to create any size test database, automated build of custom size database, automated creation of driver program parameter files, improved documentation, and some other smaller things.

We are really excited to have this new release of the DVD Store published.  I'm going to write a series of blog posts over the next couple of weeks to explain the details behind the new DVD Store 2.1 features.  I would also like to thank Girish Khadke who was my intern last summer and contributed many lines of code to DVD Store 2.1.