Wednesday, April 16, 2014

MySQL Server QA is hiring

Hi,
I am hiring for my team in Bangalore. If you are passionate about databases and testing please send your resume to anitha.gopi@oracle.com.


Job details at https://irecruitment.oracle.com/OA_HTML/OA.jsp?OAFunc=IRC_VIS_VAC_DISPLAY&OAMC=R&p_svid=2477587&p_spid=2529909

Monday, April 7, 2014

Test improvements in MySQL 5.7.4

Here is a summary of the improvements to the MTR test tool and test suite in 5.7.4.

New Tests

Added 69 new tests and enhanced several existing tests in the MTR suite.

Test Suite migration

Test suite migration is continuing; in 5.7.4 we completed migration of the replication suites (rpl and binlog) and about 75% of the main suite. The relevant worklogs are:
  • WL#6921  Migrate rpl suite to run with innodb engine
  • WL#6922  Migrate binlog suite to run with innodb storage engine
  • WL#7263 Migrate myisam specific tests
  • WL#7405 Migrate partition tests in the main suite
  • WL#7402 Migrate tests in main suite for federated, blackhole, merge and csv engine
  • WL#7411 Migrate tests that do not show any result difference when run with innodb engine
  • WL#7410 Migrate authentication tests in main suite
  • WL#7404 Migrate ctype tests in the main suite

Test Suite for replication (rpl) made GTID agnostic

Global Transaction Identifiers (GTIDs) were introduced in 5.6, and MTR tests were added specifically for testing GTIDs. To improve test coverage we wanted to run all tests in the rpl suite with GTIDs ON. This was not possible because many tests would show result differences and fail. We have addressed this in 5.7.4 and have added regular regression runs of the rpl suite with GTIDs turned on. Here is an example command line that is run regularly:
perl mysql-test-run.pl --force --timer --debug-server --parallel=auto --experimental=collections/default.experimental --comment=rpl_gtid-debug --vardir=var-rpl_gtid-debug --suite=rpl --mysqld=--enforce-gtid-consistency --mysqld=--log-slave-updates --mysqld=--gtid-mode=on --skip-test-list=collections/disabled-gtid-on.list --big-test --testcase-timeout=60 --suite-timeout=360
Refer to WL#7205 for more details.

Moved InnoDB compression tests to a separate suite

Tests for InnoDB compression were part of the InnoDB suite, so it was not easy to run only these tests with different compression options. The compression tests have now been moved to a new suite called innodb_zip. This change was made from 5.5 onwards, and the following command lines were added to the regression runs:
  • perl mysql-test-run.pl --vardir=var-compressed_log0 --force --big-test --comment=compressed_log0 --testcase-timeout=60 --debug-server --parallel=auto --experimental=collections/default.experimental --mysqld=--innodb-log-compressed-pages=0 --suite=innodb_zip
  • perl mysql-test-run.pl --vardir=var-compressed_log1 --force --big-test --comment=compressed_log1 --testcase-timeout=60 --debug-server --parallel=auto --experimental=collections/default.experimental --mysqld=--innodb-log-compressed-pages=1 --suite=innodb_zip
  • perl mysql-test-run.pl --vardir=var-compressed_log0_level1 --force --big-test --comment=compressed_log0_level1 --testcase-timeout=60 --debug-server --parallel=auto --experimental=collections/default.experimental --mysqld=--innodb-log-compressed-pages=0 --mysqld=--innodb-compression-level=1 --suite=innodb_zip
  • perl mysql-test-run.pl --vardir=var-compressed_log1_level9 --force --big-test --comment=compressed_log1_level9 --testcase-timeout=60 --debug-server --parallel=auto --experimental=collections/default.experimental --mysqld=--innodb-log-compressed-pages=1 --mysqld=--innodb-compression-level=9 --suite=innodb_zip
  • perl mysql-test-run.pl --vardir=var-compressed_log0_level9_4k --force --big-test --comment=compressed_log0_level9_4k --testcase-timeout=60 --debug-server --parallel=auto --experimental=collections/default.experimental --mysqld=--innodb-log-compressed-pages=0 --mysqld=--innodb-compression-level=9 --mysqld=--innodb-page-size=4k --suite=innodb_zip
  • perl mysql-test-run.pl --vardir=var-compressed_log1_level1_8k --force --big-test --comment=compressed_log1_level1_8k --testcase-timeout=60 --debug-server --parallel=auto --experimental=collections/default.experimental --mysqld=--innodb-log-compressed-pages=1 --mysqld=--innodb-compression-level=1 --mysqld=--innodb-page-size=8k --suite=innodb_zip

Minor enhancement to mysql-test-run.pl

A new option "--do-test-list" was added to mysql-test-run.pl. It takes a file name as an argument and runs the tests listed in that file.
e.g.: perl mysql-test-run.pl --do-test-list=mytests.list
$ cat mytests.list
federated.federated
sys_vars.all_vars
main.analyze
main.archive
main.blackhole
The above command will run all tests listed in mytests.list. This is useful during development when we are working on a feature that impacts tests spread across multiple suites: the relevant tests can be grouped together and run with a single command line.

Thursday, March 6, 2014

Migration of MTR suites to use InnoDB (continued …)

This is a continuation of my post migration-of-mtr-suites-to-use-innodb. To set the context, here is a quick recap.
MySQL 5.5 made the following changes with respect to the default engine:
  • Default engine in the server changed from MyISAM to InnoDB
  • MTR was modified to start the server with the old default, MyISAM. (This was required because historically most test results were recorded with MyISAM, and many tests would fail if run with the new server default, InnoDB.)
  • Tests were retained as-is in 5.5, with a plan to migrate them to run with the default engine in a future release.
In the MySQL 5.7 release we started the migration project. Right at the beginning we realized that the switch of the default engine had an unexpected side effect: MTR tests developed after 5.5 were also being recorded and run with MyISAM unless an engine was explicitly specified in the test. In other words, we were in a state where not only old tests but also new tests were running with the old default engine, MyISAM. As a result the backlog of tests that had to be migrated was growing over time. We decided to fix the problem with new tests first and old tests later.

The only way we could force new tests to run with InnoDB by default was to remove the engine switch in MTR. This, however, would bring back the problem of existing tests failing with result differences. We decided to tackle the problem with a different approach: instead of switching the engine in the test tool itself, it was switched in the existing tests. This is where force_myisam_default.inc makes its entry. If this file is included, it tells MTR to start the server with MyISAM. It was added to all existing tests that did not have any explicit engine specification at the test level; to be more precise, it was added to tests that did not include have_innodb.inc, have_archive.inc, etc. We then removed the switch in MTR, so that by default it starts the server with its default engine, InnoDB. With these two changes we reached the state where new tests run with InnoDB by default. This was a big step forward, and in hindsight I think we should have done this instead of the engine switch in 5.5.
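The second change described above can be sketched roughly as follows. This is a simplification, assuming only have_innodb.inc marks an engine-specific test (the real change also looked for have_archive.inc and the other engine includes):

```shell
# Rough sketch: prepend the MyISAM include to tests that have no explicit
# engine include, so they keep running with MyISAM after the MTR switch.
add_myisam_default() {
  dir=$1
  for t in "$dir"/*.test; do
    # tests that already pin an engine are left untouched
    if grep -q 'have_innodb\.inc' "$t"; then
      continue
    fi
    # prepend the include directive to the test file
    printf '%s\n%s\n' '--source include/force_myisam_default.inc' "$(cat "$t")" > "$t"
  done
}
```

Running this over a directory of .test files marks every engine-agnostic test, after which the engine switch in MTR itself can be removed safely.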

Having fixed the problem with new tests, we have now shifted our focus to migrating the old tests. The migration project does not in any way reduce the test coverage on MyISAM; rather, it adds InnoDB coverage where it was missing. Post migration, tests for features that depend on the engine will have MyISAM and InnoDB variants. Tests for features that are expected to behave the same on all engines may run only with the default engine, InnoDB.
We have outlined following strategy for the migration project:
  • Maintain MyISAM and InnoDB variants for tests and subtests that are dependent on the engine
  • Remove MyISAM dependencies from tests that are not meant for testing MyISAM features
  • Retain tests for MyISAM only features in MyISAM
There is an element of risk in this project, since any small mistake can lead to missing or, even worse, wrong tests. However, rest assured that this is being handled with utmost care by our strong QA team. To ensure correctness and stability, all result differences are analyzed and discussed with the people concerned before any changes are made. Some of the differences were due to bugs, and these were fixed before test migration. Others are expected and are accepted as they are. Listed below are a few examples of the changes made to the result files:
  • Engine name in SHOW CREATE TABLE
  • Adding "ORDER BY" to SELECT queries so that row order is deterministic
  • Adding an "ANALYZE TABLE" statement before EXPLAIN to get consistent test output
  • Minor differences in EXPLAIN output
As of the 5.7.3 release, migration is complete for the auth_sec, federated, funcs_1, funcs_2, jp, opt_trace, perfschema, stress and sys_vars suites. It is not all done yet, and you will see more tests getting migrated in the upcoming milestone releases.

If you have any questions or feedback regarding this project please post them here. I would love to hear what the community thinks about this.

Thursday, January 30, 2014

Migration of MTR suites to use InnoDB

In MySQL 5.7.2 a new include file, "--source include/force_myisam_default.inc", made its appearance in most of the .test files in the MTR suite. If you were wondering about this, read on.
I will explain this change in two blog posts. The first describes why we are doing this, and the next will explain how it is being done.

In order to set the context, let me delve a bit into history. Starting from MySQL 5.5, the default storage engine for new tables is InnoDB. This change was coupled with a reverse switch in mysql-test-run (MTR), which made MyISAM the default storage engine for the server started through MTR. As a result the default storage engine in the server was InnoDB, but most tests were run with the old default, MyISAM.

Let me explain why such a switch was required. The usual practice in MTR test development was to not specify an engine in the CREATE TABLE statement unless the test was specifically for a non-default engine like MERGE, ARCHIVE, InnoDB, etc. As a result, tables in most tests were created with the default engine, MyISAM. From a test coverage perspective this was a perfectly sound strategy: tests for engine-specific properties were run with the appropriate engines, and tests for engine-independent properties were run with the default engine, MyISAM. However, this strategy became a problem when the default engine was changed. Tables in tests that did not specify an engine started getting created with the new default, InnoDB, leading to result differences and in turn test failures. There were hundreds of failing tests, and fixing them all was an enormous task.

To overcome this situation, the following two options were considered:
  • Option 1: Migrate all tests to run with the InnoDB engine. This would involve analyzing all test failures and modifying the test or result files as required.
  • Option 2: Switch the default storage engine in MTR so that there is no need to change the test or result files.
Option 1 would have been ideal, but it quickly became obvious that it would be a time- and resource-intensive solution. On the other hand, option 2 was easy but risky. A careful evaluation of the test suite showed that the risk with option 2 was not very high. The tests that did not have any engine specified were either tests for MyISAM-only features or tests that were not considered sensitive to the engine. For everything else there were engine-specific variants of the test. For example, the partitioning test has six flavors: partition_archive, partition_federated, partition_csv, partition_innodb, partition_blackhole and partition_myisam. So the risk with option 2 was not very significant and was acceptable, at least in the short term.

After evaluating the pros and cons of both options, it was decided to go with option 2 for the 5.5 release and to implement option 1 in a future release. MySQL 5.5 and 5.6 were released using MyISAM as the default in the MTR suite. In 5.7 we have started implementing option 1, and this is the reason for the force_myisam_default.inc.

Did I confuse you? I said migrate to innodb and the inc file says force_myisam_default. Well, if you are confused wait for my next blog :)

Monday, November 11, 2013

Test Automation – Does it put your job at risk?



Some years back I attended a talk on testing practices where the presenter asked the audience how much test automation they had achieved in their projects. I was the only one who answered almost 100%. He advised me to keep this a secret from my management if I did not want my team to be downsized :). Good advice, but it got me thinking about whether test automation actually makes testers redundant.
Having spent a good part of my career in testing, I have seen the discipline mature over the years. My opinion is that automation does not take away tester jobs; on the contrary, it makes the job more interesting and effective.
 
In my mind I classify the progression of automation in an organization into four stages, described below.


Stage 1

In stage 1, testing is a completely manual activity. Tester responsibilities are writing test case documents based on the requirements and functional specification, executing tests manually by following the documents, and creating bug reports for any deviations. As the product grows in size and functionality, more test cases are added and more resources are needed to execute them manually. Sounds like a good recipe for growing your test team? Well, not exactly. This model will not make either your test team or your management happy. Here are a few reasons why:

  • Manual test execution is error prone: Human beings are prone to mistakes, especially when doing a job that is repetitive and boring. Invalid bugs may be raised or, even worse, valid bugs may be missed due to errors in manual execution.
  • Impossible to test everything manually: Testing is all about simulating conditions that are as close as possible to real production behavior. For example, consider testing a ticket-booking website. Typical use cases include hundreds of users accessing the website at the same time, or two or more users trying to book the same flight. How do you test these manually? You cannot possibly line up hundreds of testers and ask them to access the website at the sound of a bell.
  • Manual testing cannot scale: With each release of the product new features are added and the number of test cases grows rapidly. It is not possible to add testers at the same pace, and as a result the time taken for test execution grows out of control.
  • Unhappy testers: A good tester is a creative person who loves to explore new features and new methods to break existing and new functionality. In stage 1 they get trapped in endless cycles of manual repetitive regression test runs. This makes good testers unhappy and they will not stick to the job for long.
  • Unhappy management: With a large test suite, regression testing will take several days or weeks, making it almost impossible to deliver the product on time. This will make management unhappy with the test team.
These problems will force all organizations to move to stage 2 sooner or later. 

Stage 2

Automation can be introduced once the product has stabilized and functionality is no longer changing frequently. The first step is identifying an appropriate automation tool. Several tools are available on the market, and one has to be chosen based on the needs of the project. Very often none of them meet all the requirements, and you may have to build an in-house automation tool. For most of the projects I have worked on, we developed our own tools.
In the initial phase the cost of automation will exceed that of manual test execution. The organization will have built up a huge backlog of test cases over time, and these cannot be automated overnight. Automation itself will be slow until the testers gain sufficient experience with the tool and framework. Moreover, until a good percentage of tests are automated, manual test execution has to continue in parallel with the automation activity. The real benefits show only in the long run, when a good percentage of tests are automated. It is important that management support the testing organization in this ramp-up phase.
As experience with the tools and techniques increases, automation gets pushed into earlier stages of the development process. Today we have reached a state where automated tests can be developed in parallel with the product, so that testing can start as soon as the product is ready. In most projects I have worked on in recent years we have achieved close to 100% test automation.

Stage 3

In this stage the organization starts thinking about how to bring in more automation. The tests are automated, but test runs still have to be triggered manually for each build. Many days pass between test execution cycles, and a lot of new code is added in that time. As a result, each test run discovers a batch of new problems, which is a very inefficient way of finding and fixing them.
The solution comes in the form of continuous integration testing (CIT) tools, which can trigger test runs automatically at a configurable frequency. There are many CIT tools on the market; we use Hudson, a very popular open source tool.
With the availability of CIT tools there will be a temptation to run all the tests for every check-in. Remember that this also comes at a cost: you need hardware to run tests, and running everything everywhere all the time may be overkill. In our team we have followed a tiered approach. A small suite that finishes in less than an hour runs for every check-in, a larger suite that finishes in 8 hours runs every night, and the complete suite runs every week; the weekly suite can run 24 hours a day for all 7 days of the week. The outcome is that bugs are discovered as soon as they are introduced, and it becomes easy to isolate the root cause and fix them. As a result we reduced not only the testing time, but also the time taken to find and fix bugs.
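The tiered scheme could be sketched as a small wrapper that maps a trigger to a test-run command line. The suite lists and timeouts below are purely illustrative, not our actual Hudson job definitions:

```shell
# Illustrative tier selection: which mysql-test-run.pl invocation to use
# for each trigger. (Suite lists and timeouts are made up for this sketch.)
tier_command() {
  case $1 in
    checkin) echo "perl mysql-test-run.pl --parallel=auto --suite=main" ;;
    nightly) echo "perl mysql-test-run.pl --parallel=auto --suite=main,rpl,innodb --big-test" ;;
    weekly)  echo "perl mysql-test-run.pl --parallel=auto --force --big-test --suite-timeout=1440" ;;
    *)       echo "unknown tier: $1" >&2; return 1 ;;
  esac
}
```

Wiring the CIT tool's check-in, nightly and weekly triggers to such a wrapper keeps the per-check-in cost low while the full suite still runs regularly.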

Stage 4

Now all tests are automated and run without any manual intervention. Is it time to fire the testers? No, we still need someone to analyze the test results. This takes us to the next level, which is automated test failure analysis. A signature is identified for each failure and associated with the corresponding bug in the bug database. Scripts are put in place to compare each failure with the known signatures and tag it with the matching bug. Now we have reached a state where no manual intervention is needed for test execution or failure analysis.
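A minimal sketch of such signature matching, assuming a tab-separated file of known signatures; the patterns and bug numbers here are invented for illustration:

```shell
# Match a failing test's log against known failure signatures.
# Signature file format: <regex><TAB><bug id>  (contents are hypothetical).
classify_failure() {
  log=$1; signatures=$2
  while IFS=$'\t' read -r pattern bug; do
    # first matching signature wins
    if grep -qE "$pattern" "$log"; then
      echo "$bug"
      return 0
    fi
  done < "$signatures"
  echo "UNKNOWN"  # no known signature: route to a human for analysis
}
```

Failures that fall through as UNKNOWN go to manual triage, which is also how new bugs get their signatures added to the database.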

Conclusion

What do testers do if everything is automated? They do what they should really be doing: spending their time on creative tasks like designing test cases for new features, exploring new ways of testing the product, and improving test coverage. The repetitive task of manual test execution is best left to the machines :).
 
To conclude, automation does not make testers redundant; rather, it frees up their time for more interesting and useful tasks. Good management will recognize the value provided by the testing team and will never consider reducing resources just because tests are automated. Testers are happy in a completely automated regime because their time is spent on test automation and not on repetitive manual test execution. I would say it is a win-win situation for testers and management.

Sunday, November 10, 2013

OSI Days

Some of my colleagues and I are giving MySQL talks at OSI Days. Check out the schedule at http://osidays.com/osidays/open-source-india-day-3/. Hope to see you all there.

Sunday, September 22, 2013

Testing improvements in MySQL 5.7.2


Yet another MySQL DMR (5.7.2) is out, and here is a short update on the testing improvements in this release.
  • Test suite migration
    • Default storage engine in mysql-test-run.pl (MTR) changed from MyISAM to InnoDB.
    • Migrated the parts, sys_vars, perfschema, funcs_1, funcs_2 and opt_trace suites to run with InnoDB.
    • MyISAM variants retained for engine-dependent tests.
    • Suites that are not yet migrated continue to run with MyISAM using the include file force_myisam_default.inc.
  • All new features qualified as per the process described at http://anithagopi.blogspot.in/2013/05/new-feature-qualification.html
  • Around 150 new MTR tests already added to 5.7
  • Code coverage at 82.3%