
Thursday 29 March 2012

BI Testing-SQL Performance tuning


Introduction:
Generally, ETL performance testing is a confirmation test to ensure that an ETL system can handle the load of multiple users and transactions. For any project, this primarily means ensuring that the system can comfortably manage a throughput of millions of transactions.
You can improve your application performance by optimizing the queries you use. The following sections outline techniques you can use to optimize query performance.
Improve Indexes:
  • Creating useful indexes is one of the most important ways to achieve better query performance. Useful indexes help you find data with fewer disk I/O operations and less system resource usage.
  • To create useful indexes, you must understand how the data is used, the types of queries and the frequencies they run, and how the query processor can use indexes to find your data quickly.
  • When you choose what indexes to create, examine your critical queries, whose performance most affects the user’s experience. Create indexes to specifically aid these queries, as in the sketch that follows this list. After adding an index, rerun the query to see whether performance has improved. If it has not, remove the index.
  • As with most performance optimization techniques, there are tradeoffs. For example, with more indexes, SELECT queries will potentially run faster. However, DML (INSERT, UPDATE, and DELETE) operations will slow down significantly because more indexes must be maintained with each operation. Therefore, if your queries are mostly SELECT statements, more indexes can be helpful. If your application performs many DML operations, you should be conservative with the number of indexes you create.
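For illustration, here is a minimal sketch of that workflow, assuming a Northwind-style Orders table (the table, column, and index names are only examples, not a real schema):

-- Critical query: fetch the orders placed by one customer.
SELECT "Order ID", "Order Date" FROM Orders WHERE "Customer ID" = 'ALFKI';

-- Index created specifically to aid that query.
CREATE INDEX IX_Orders_Customer ON Orders ("Customer ID");

-- Rerun the query and compare timings. If it is not faster, or if
-- INSERT/UPDATE/DELETE on Orders slow down noticeably, drop the index
-- again (DROP INDEX syntax varies by engine).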
Choose what to Index:
  • We recommend that you always create indexes on primary keys. It is frequently useful to also create indexes on foreign keys. This is because primary keys and foreign keys are frequently used to join tables. Indexes on these keys let the optimizer consider more efficient index join algorithms. If your query joins tables by using other columns, it is frequently helpful to create indexes on those columns for the same reason.
  • When primary key and foreign key constraints are created, SQL Server Compact 3.5 automatically creates indexes for them and takes advantage of them when optimizing queries. Remember to keep primary keys and foreign keys small; joins run faster this way (see the sketch below).
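Assuming hypothetical Customers and Orders tables with small integer keys, the idea looks like this (as noted above, SQL Server Compact 3.5 indexes the declared keys automatically):

CREATE TABLE Customers (
  "Customer ID" INT NOT NULL PRIMARY KEY,   -- small key, indexed automatically
  "Company Name" NVARCHAR(80)
);

CREATE TABLE Orders (
  "Order ID" INT NOT NULL PRIMARY KEY,
  "Customer ID" INT NOT NULL,
  "Order Date" DATETIME,
  CONSTRAINT FK_Orders_Customers FOREIGN KEY ("Customer ID")
    REFERENCES Customers ("Customer ID")    -- join column, also indexed by the engine
);

-- If queries also join or filter on another column, index it explicitly.
CREATE INDEX IX_Orders_OrderDate ON Orders ("Order Date");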
Use Indexes with Filter Clauses
  • Indexes can be used to speed up the evaluation of certain types of filter clauses. Although all filter clauses reduce the final result set of a query, some can also help reduce the amount of data that must be scanned.
  • A search argument (SARG) limits a search because it specifies an exact match, a range of values, or a conjunction of two or more items joined by AND. It has one of the following forms: Column operator <constant or variable>, or <constant or variable> operator Column, where the operator is one of =, >, <, >=, or <= (a short example follows).
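As a hypothetical illustration on the Orders table, the first filter below is a SARG and can be answered with an index seek on "Order Date", while the second wraps the column in a function and generally forces every row to be evaluated:

-- Sargable: the column stands alone on one side of the comparison.
SELECT "Order ID" FROM Orders
WHERE "Order Date" >= '1997-01-01' AND "Order Date" < '1998-01-01';

-- Not sargable: the expression on the column hides it from the index.
SELECT "Order ID" FROM Orders
WHERE DATEPART(yyyy, "Order Date") = 1997;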
Understand Response Time Vs Total Time:
  • Response time is the time it takes for a query to return the first record. Total time is the time it takes for the query to return all records. For an interactive application, response time is important because it is the perceived time for the user to receive visual affirmation that a query is being processed. For a batch application, total time reflects the overall throughput. You have to determine what the performance criteria are for your application and queries, and then design accordingly.
Example:
  • Suppose the query returns 100 records and is used to populate a list with the first five records. In this case, you are not concerned with how long it takes to return all 100 records. Instead, you want the query to return the first few records quickly, so that you can populate the list.
  • Many query operations can be performed without having to store intermediate results. These operations are said to be pipelined. Examples of pipelined operations are projections, selections, and joins. Queries implemented with these operations can return results immediately. Other operations, such as SORT and GROUP-BY, require using all their input before returning results to their parent operations. These operations are said to require materialization. Queries implemented with these operations typically have an initial delay because of materialization. After this initial delay, they typically return records very quickly.
  • Queries with response time requirements should avoid materialization. For example, using an index to implement ORDER-BY, yields better response time than using sorting. The following section describes this in more detail.
Index the ORDER-BY / GROUP-BY / DISTINCT Columns for Better Response Time
  • The ORDER-BY, GROUP-BY, and DISTINCT operations are all types of sorting. The SQL Server Compact 3.5 query processor implements sorting in two ways. If records are already sorted by an index, the processor needs to use only the index. Otherwise, the processor has to use a temporary work table to sort the records first. Such preliminary sorting can cause significant initial delays on devices with lower power CPUs and limited memory, and should be avoided if response time is important.
  • In the context of multiple-column indexes, for ORDER-BY or GROUP-BY to consider a particular index, the ORDER-BY or GROUP-BY columns must match the prefix set of index columns in the exact order. For example, the index CREATE INDEX Emp_Name ON Employees ("Last Name" ASC, "First Name" ASC) can help optimize the following queries (a complete example appears after the fragments below):
    • … ORDER BY / GROUP BY “Last Name” …
    • … ORDER BY / GROUP BY “Last Name”, “First Name” …
It will not help optimize:
  • … ORDER BY / GROUP BY “First Name” …
  • … ORDER BY / GROUP BY “First Name”, “Last Name” …
For a DISTINCT operation to consider a multiple-column index, the projection list must match all index columns, although they do not have to be in the exact order. The previous index can help optimize the following queries:
  • … DISTINCT “Last Name”, “First Name” …
  • … DISTINCT “First Name”, “Last Name” …
It will not help optimize:
  • … DISTINCT “First Name” …
  • … DISTINCT “Last Name” …
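Putting the fragments together, a complete query that the Emp_Name index can satisfy without a sort might look like the following sketch (Employees table as in the example above):

CREATE INDEX Emp_Name ON Employees ("Last Name" ASC, "First Name" ASC);

-- The ORDER BY columns match the index prefix in the same order, so the
-- rows can be returned straight from the index, with no temporary work
-- table and therefore a much faster first row.
SELECT "Last Name", "First Name"
FROM Employees
ORDER BY "Last Name", "First Name";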
Rewrite Subqueries to Use JOIN
Sometimes you can rewrite a subquery to use JOIN and achieve better performance. The advantage of creating a JOIN is that you can evaluate tables in a different order from that defined by the query. The advantage of using a subquery is that it is frequently not necessary to scan all rows from the subquery to evaluate the subquery expression. For example, an EXISTS subquery can return TRUE upon seeing the first qualifying row.
Example:
To determine all the orders that have at least one item with a 25 percent discount or more, you can use the following EXISTS subquery:
SELECT "Order ID" FROM Orders O
WHERE EXISTS (SELECT "Order ID"
FROM "Order Details" OD
WHERE O."Order ID" = OD."Order ID"
AND Discount >= 0.25)
You can rewrite this by using JOIN:
SELECT DISTINCT O."Order ID" FROM Orders O INNER JOIN "Order Details"
OD ON O."Order ID" = OD."Order ID" WHERE Discount >= 0.25
Limit Using Outer JOINs
OUTER JOINs are treated differently from INNER JOINs by the optimizer: it does not try to rearrange the join order of OUTER JOIN tables as it does for INNER JOIN tables. The outer table (the left table in a LEFT OUTER JOIN and the right table in a RIGHT OUTER JOIN) is accessed first, followed by the inner table. This fixed join order can lead to execution plans that are less than optimal.
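For example (a sketch reusing the Orders tables above), when a filter on the inner table discards the NULL-extended rows anyway, the outer join can be written as an inner join, which gives the optimizer the freedom to reorder the tables:

-- The WHERE clause rejects rows where OD."Order ID" is NULL, so this
-- LEFT OUTER JOIN behaves exactly like an inner join...
SELECT O."Order ID", OD."Product ID"
FROM Orders O LEFT OUTER JOIN "Order Details" OD
  ON O."Order ID" = OD."Order ID"
WHERE OD.Discount >= 0.25;

-- ...and writing it as an INNER JOIN says so directly, letting the
-- optimizer consider other join orders.
SELECT O."Order ID", OD."Product ID"
FROM Orders O INNER JOIN "Order Details" OD
  ON O."Order ID" = OD."Order ID"
WHERE OD.Discount >= 0.25;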
Use Parameterized Queries:
  • If your application runs a series of queries that are only different in some constants, you can improve performance by using a parameterized query. For example, to return orders by different customers, you can run the following query:
  • SELECT "Order ID" FROM Orders WHERE "Customer ID" = ?
  • Parameterized queries yield better performance by compiling the query only once and executing the compiled plan multiple times. Programmatically, you must hold on to the command object that contains the cached query plan. Destroying the previous command object and creating a new one destroys the cached plan, which requires the query to be re-compiled. If you must run several parameterized queries in an interleaved manner, you can create several command objects, each caching the execution plan for one parameterized query. This way, you effectively avoid re-compilation for all of them.
17 Tips for Avoiding Problematic Queries
1. Avoid Cartesian products
2. Avoid full table scans on large tables
3. Use SQL standards and conventions to reduce parsing
4. Index columns used in the WHERE clause
5. Avoid joining too many tables
6. Monitor V$SESSION_LONGOPS to detect long running operations
7. Use hints as appropriate
8. Use the CURSOR_SHARING parameter
9. Use the rule-based optimizer if it performs better than the cost-based optimizer
10. Avoid unnecessary sorting
11. Monitor index browning (due to deletions; rebuild as necessary)
12. Use compound indexes with care (Do not repeat columns)
13. Monitor query statistics
14. Use different tablespaces for tables and indexes (as a general rule; this is old-school somewhat, but the main point is reduce I/O contention)
15. Use table partitioning (and local indexes) when appropriate (partitioning is an extra cost feature)
16. Avoid literals in the WHERE clause; use bind variables instead (see the sketch after this list)
17. Keep statistics up to date
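As a small illustration of tip 16 (Oracle-style syntax, with hypothetical table and bind names), the literal version forces a fresh parse for every distinct value, while the bind-variable version reuses one shared cursor:

-- Hard-coded literal: each distinct customer value is parsed as a new statement.
SELECT order_id FROM orders WHERE customer_id = 'ALFKI';

-- Bind variable: the statement text stays constant, so the parsed cursor
-- in the shared pool is reused across executions.
SELECT order_id FROM orders WHERE customer_id = :cust_id;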
Conclusion
ETL projects today are designed for correct functionality and adequate performance, i.e., to complete within a time window. However, the task of optimizing ETL designs is left to the experience and intuition of the ETL designers. In addition, ETL designs face additional objectives beyond performance.

ETL testing Fundamentals


Introduction:
Comprehensive testing of a data warehouse at every point throughout the ETL (extract, transform, and load) process is becoming increasingly important as more data is being collected and used for strategic decision-making. Data warehouse or ETL testing is often initiated as a result of mergers and acquisitions, compliance and regulations, data consolidation, and the increased reliance on data-driven decision making (use of Business Intelligence tools, etc.). ETL testing is commonly implemented either manually or with the help of a tool (functional testing tool, ETL tool, proprietary utilities). Let us understand some of the basic ETL concepts.
BI / Data Warehousing testing projects can broadly be divided into ETL (Extract, Transform, Load) testing and the subsequent report testing.
Extract, Transform, Load is the process that enables businesses to consolidate their data while moving it from place to place, that is, moving data from source systems into the data warehouse. The data can arrive from any source.
Extract - It can be defined as extracting the data from numerous heterogeneous systems.
Transform - Applying the business logic, as specified by the business, to the data derived from the sources.
Load - Pumping the data into the final warehouse after completing the above two processes. The ETL part of the testing mainly deals with how, when, from where, and what data we carry into our data warehouse, from which the final reports are generated. Thus, ETL testing spans each and every stage of data flow in the warehouse, starting from the source databases through to the final target warehouse.
Star Schema
The star schema is perhaps the simplest data warehouse schema. It is called a star schema because the entity-relationship diagram of this schema resembles a star, with points radiating from a central table. The center of the star consists of a large fact table and the points of the star are the dimension tables.
A star schema is characterized by one or more very large fact tables that contain the primary information in the data warehouse, and a number of much smaller dimension tables (or lookup tables), each of which contains information about the entries for a particular attribute in the fact table.
A star query is a join between a fact table and a number of dimension tables. Each dimension table is joined to the fact table using a primary key to foreign key join, but the dimension tables are not joined to each other. The cost-based optimizer recognizes star queries and generates efficient execution plans for them. A typical fact table contains keys and measures. For example, in the sample schema, the fact table sales contains the measures quantity sold, amount, and average, and the keys time key, item key, branch key, and location key. The dimension tables are time, branch, item, and location.
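A typical star query against the sample schema described above might look like the following sketch (the key, measure, and attribute column names are illustrative):

-- The fact table joins to each dimension on its key; the dimension
-- tables are not joined to each other.
SELECT t.calendar_month, l.region, SUM(s.quantity_sold) AS total_quantity
FROM sales s
  JOIN "time" t ON s.time_key = t.time_key
  JOIN location l ON s.location_key = l.location_key
WHERE t.calendar_year = 1997
GROUP BY t.calendar_month, l.region;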
Snow-Flake Schema
The snowflake schema is a more complex data warehouse model than a star schema, and is a type of star schema. It is called a snowflake schema because the diagram of the schema resembles a snowflake. Snowflake schemas normalize dimensions to eliminate redundancy. That is, the dimension data has been grouped into multiple tables instead of one large table.
For example, a location dimension table in a star schema might be normalized into a location table and a city table in a snowflake schema. While this saves space, it increases the number of dimension tables and requires more foreign key joins. The result is more complex queries and reduced query performance.
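Continuing that example (hypothetical column names), the snowflaked location dimension holds only a key to the normalized city table, so the same kind of query needs an extra join:

-- Star version: city attributes live directly in the location dimension.
SELECT l.city_name, SUM(s.quantity_sold) AS total_quantity
FROM sales s
  JOIN location l ON s.location_key = l.location_key
GROUP BY l.city_name;

-- Snowflake version: location is normalized, so city comes from its own table.
SELECT c.city_name, SUM(s.quantity_sold) AS total_quantity
FROM sales s
  JOIN location l ON s.location_key = l.location_key
  JOIN city c ON l.city_key = c.city_key
GROUP BY c.city_name;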
When to use star schema and snowflake schema?
When we refer to star and snowflake schemas, we are talking about a dimensional model for a data warehouse or a data mart. The star schema model gets its name from its appearance: there is one central fact table surrounded by many dimension tables. The relationship between the fact and dimension tables is created by a PK -> FK relationship, and the keys are generally surrogates for the natural or business keys of the dimension tables. All data for any given dimension is stored in the one dimension table. Thus, the design of the model could potentially look like a star.

On the other hand, the snowflake schema model breaks the dimension data into multiple tables for the purpose of making the data more easily understood or for reducing the width of the dimension table. An example of this type of schema might be a dimension with product data at multiple levels. Each level in the product hierarchy might have multiple attributes that are meaningful only to that level. Thus, one would break the single dimension table into multiple tables in a hierarchical fashion, with the highest level tied to the fact table. Each table in the dimension hierarchy would be tied to the level above by a natural or business key, while the highest level would be tied to the fact table by a surrogate key. As you can imagine, the appearance of this schema design could resemble a snowflake.
Types of Dimension Tables
Type 1: This is a straightforward refresh. The fields are simply overwritten and history is not kept for the column. For example, should a description change for a product number, the old value will be overwritten by the new value.
Type 2: This is known as a slowly changing dimension, as history can be kept. The column(s) for which history is captured have to be defined. In our example of the product description changing for a product number, if the slowly changing attribute captured is the product description, a new row of data will be created showing the new product description. The old description will still be contained in the old row.
Type 3: This is also a slowly changing dimension. However, instead of creating a new row, in this example the old product description is moved to an “old value” column in the dimension, while the new description overwrites the existing column. In addition, a date stamp column records when the value was updated. Although there is no full history here, the previous value prior to the update is captured. No new rows are created for history.
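As a sketch of how these behaviours look in SQL, assuming a hypothetical dim_product table with effective-date and current-flag columns for Type 2 and an old_description column for Type 3:

-- Type 1: simply overwrite; no history.
UPDATE dim_product
SET product_description = 'New description'
WHERE product_number = 'P-100';

-- Type 2: close the current row and insert a new one; history is kept.
UPDATE dim_product
SET end_date = '2012-03-29', current_flag = 'N'
WHERE product_number = 'P-100' AND current_flag = 'Y';

INSERT INTO dim_product
  (product_key, product_number, product_description, start_date, end_date, current_flag)
VALUES
  (20045, 'P-100', 'New description', '2012-03-29', NULL, 'Y');

-- Type 3: keep only the previous value and a date stamp, in the same row.
UPDATE dim_product
SET old_description = product_description,
    product_description = 'New description',
    description_updated_on = '2012-03-29'
WHERE product_number = 'P-100';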
Types of fact tables:
Transactional: Most facts will fall into this category. The transactional fact will capture transactional data such as sales lines or stock movement lines. The measures for these facts can be summed together.
Snapshot: A snapshot fact captures the current data at a point in time, for example all the current stock positions (which items are in which branch) at the end of a working day.
Snapshot fact measures can be summed within a single snapshot day, but cannot be summed across snapshot days, as the result would be incorrect.
Accumulative: An accumulative snapshot sums data up for an attribute and is not based on time. For example, to get the accumulative sales quantity for a sale of a particular product, the value for that row is recalculated each night, giving an “accumulative” value.
Key hit-points in ETL testing: There are several levels of testing that can be performed during data warehouse testing, and they should be defined as part of the testing strategy in the different phases (Component, Assembly, Product) of testing. Some examples include:
1. Constraint Testing: During constraint testing, the objective is to validate unique constraints, primary keys, foreign keys, indexes, and relationships. The test script should include these validation points. Some ETL processes can be developed to validate constraints during the loading of the warehouse. If the decision is made to add constraint validation to the ETL process, the ETL code must validate all business rules and relational data requirements. With automation, it should be ensured that the setup is done correctly and maintained throughout the ever-changing requirements process for effective testing. An alternative to automation is to use manual queries: queries are written to cover all test scenarios and executed manually (an example pattern is sketched after this list).
2. Source to Target Counts: The objective of the count test scripts is to determine if the record counts in the source match the record counts in the target. Some ETL processes are capable of capturing record count information such as records read, records written, records in error, etc. If the ETL process used can capture that level of detail and create a list of the counts, allow it to do so. This will save time during the validation process. It is always a good practice to use queries to double check the source to target counts.
3. Source to Target Data Validation: No ETL process is smart enough to perform source to target field-to-field validation. This piece of the testing cycle is the most labor intensive and requires the most thorough analysis of the data. There are a variety of tests that can be performed during source to target validation. Below is a list of tests that are best practices:
4. Transformation and Business Rules: Tests to verify all possible outcomes of the transformation rules, default values, straight moves and as specified in the Business Specification document. As a special mention, Boundary conditions must be tested on the business rules.
5. Batch Sequence & Dependency Testing: ETLs in a data warehouse are essentially a sequence of processes that execute in a particular order. Dependencies exist among the various processes, and maintaining them is critical to the integrity of the data. Executing the sequences in the wrong order might result in inaccurate data in the warehouse. The testing process must include at least two iterations of the end-to-end execution of the whole batch sequence, and data must be checked for integrity during this testing. The most common errors caused by an incorrect sequence are referential integrity failures, incorrect end-dating (if applicable), reject records, etc.
6. Job Restart Testing: In a real production environment, ETL jobs/processes fail for a number of reasons (for example, database failures or connectivity failures), and jobs can fail when only partly executed. A good design always allows the jobs to be restarted from the point of failure. Although this is more of a design suggestion, it is recommended that every ETL job is built and tested for restart capability.
7. Error Handling: Understanding that a script might fail during data validation may still confirm, through process validation, that the ETL process is working. During process validation the testing team will work to identify additional data cleansing needs, as well as consistent error patterns that could possibly be averted by modifying the ETL code. It is the responsibility of the validation team to identify any and all records that seem suspect. Once a record has been both data and process validated and the script has passed, the ETL process is functioning correctly. Conversely, if suspect records that have been identified and documented during data validation are not supported through process validation, the ETL process is not functioning correctly.
8. Views: Views created on the tables should be tested to ensure the attributes mentioned in the views are correct and the data loaded in the target table matches what is being reflected in the views.
9. Sampling: Sampling will involve creating predictions out of a representative portion of the data that is to be loaded into the target table; these predictions will be matched with the actual results obtained from the data loaded for business Analyst Testing. Comparison will be verified to ensure that the predictions match the data loaded into the target table.
10. Process Testing: The testing of intermediate files and processes to ensure the final outcome is valid and that performance meets the system/business need.
11. Duplicate Testing: Duplicate testing must be performed at each stage of the ETL process and in the final target table. This testing involves checks for duplicate rows and for multiple rows with the same primary key, neither of which can be allowed (a sample query is sketched after this list).
12. Performance: It is the most important aspect after data validation. Performance testing should check if the ETL process is completing within the load window.
13. Volume: Verify that the system can process the maximum expected quantity of data for a given cycle in the time expected.
14. Connectivity Tests: As the name suggests, this involves testing the upstream and downstream interfaces and intra-DW connectivity. It is suggested that the testing represents the exact transactions between these interfaces. For example, if the design approach is to extract files from the source system, we should actually test extracting a file out of that system and not just the connectivity.
15. Negative Testing: Negative testing checks whether the application fails where it should fail, using invalid inputs and out-of-boundary scenarios, and verifies how the application behaves in those cases.
16. Operational Readiness Testing (ORT): This is the final phase of testing which focuses on verifying the deployment of software and the operational readiness of the application. The main areas of testing in this phase include:
Deployment Test
1. Tests the deployment of the solution
2. Tests overall technical deployment “checklist” and timeframes
3. Tests the security aspects of the system, including user authentication and authorization, and user-access levels.
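As an illustration of the manual queries mentioned in points 1, 2 and 11 above, the checks often reduce to a few SQL patterns (the staging and warehouse table names here are hypothetical):

-- Constraint testing (point 1): fact rows whose customer key has no
-- parent in the dimension; the query should return zero rows.
SELECT f.customer_key
FROM fact_sales f
LEFT JOIN dim_customer d ON f.customer_key = d.customer_key
WHERE d.customer_key IS NULL;

-- Source to target counts (point 2): the two counts should match,
-- after allowing for rejected records.
SELECT COUNT(*) AS source_count FROM stg_sales;
SELECT COUNT(*) AS target_count FROM fact_sales;

-- Duplicate testing (point 11): more than one row per business key is a defect.
SELECT order_id, order_line, COUNT(*) AS row_count
FROM fact_sales
GROUP BY order_id, order_line
HAVING COUNT(*) > 1;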
Conclusion
Evolving needs of the business and changes in the source systems will drive continuous change in the data warehouse schema and the data being loaded. Hence, it is necessary that development and testing processes are clearly defined, followed by impact-analysis and strong alignment between development, operations and the business.

Thursday 15 March 2012

Automation Tool Selection Recommendation


“Automated Testing” means automating the manual testing process currently in use. This requires that a formalized “manual testing process” currently exists in the company or organization. Minimally, such a process includes:
–        Detailed test cases, including predictable “expected results”, which have been developed from Business Functional Specifications and Design documentation.
–        A standalone Test Environment, including a Test Database that is restorable to a known constant, such that the test cases are able to be repeated each time there are modifications made to the application.
Information Gathering
The following are sample questions asked of testers who have been using some of the testing tools:
How long have you been using this tool and are you basically happy with it?
How many copies/licenses do you have and what hardware and software platforms are you using?
How did you evaluate and decide on this tool and which other tools did you consider before purchasing this tool?
How does the tool perform and are there any bottlenecks?
What is your impression of the vendor (commercial professionalism, on-going level of support, documentation and training)?
Tools and Vendors
  • Robot – Rational Software
  • WinRunner 7 – Mercury
  • QA Run 4.7 – Compuware
  • Visual Test – Rational Software
  • Silk Test – Segue
  • QA Wizard – Seapine Software
Tools Overview
Robot – Rational Software
–        IBM Rational Robot v2003 automates regression, functional and configuration testing for e-commerce, client/server and ERP applications. It’s used to test applications constructed in a wide variety of IDEs and languages, and ships with IBM Rational TestManager. Rational TestManager provides desktop management of all testing activities for all types of testing.
WinRunner 7 – Mercury
–        Mercury WinRunner is a powerful tool for enterprise wide functional and regression testing.
–        WinRunner captures, verifies, and replays user interactions automatically to identify defects and ensure that business processes work flawlessly upon deployment and remain reliable.
–        WinRunner allows you to reduce testing time by automating repetitive tasks and optimize testing efforts by covering diverse environments with a single testing tool.
QA Run 4.7 – Compuware
–        With QA Run, programmers get the automation capabilities they need to quickly and productively create and execute test scripts, verify tests and analyze test results.
–        Uses an object-oriented approach to automate test script generation, which can significantly increase the accuracy of testing in the time you have available.
Visual Test 6.5 – Rational Software
–        Based on the BASIC language and used to simulate user actions on a User Interface.
–        Is a powerful language providing support for pointers, remote procedure calls, working with advanced data types such as linked lists, open-ended hash tables, callback functions, and much more.
–        Is a host of utilities for querying an application to determine how to access it with Visual Test, screen capture/comparison, script executor, and scenario recorder.
Silk Test – Segue
–        Is an automated tool for testing the functionality of enterprise applications in any environment.
–        Designed for ease of use, Silk Test includes a host of productivity-boosting features that let both novice and expert users create functional tests quickly, execute them automatically and analyze results accurately.
–        In addition to validating the full functionality of an application prior to its initial release, users can easily evaluate the impact of new enhancements on existing functionality by simply reusing existing test cases.
QA Wizard – Seapine Software
–        Completely automates the functional regression testing of your applications and Web sites.
–        It’s an intelligent object-based solution that provides data-driven testing support for multiple data sources.
–        Uses scripting language that includes all of the features of a modern structured language, including flow control, subroutines, constants, conditionals, variables, assignment statements, functions, and more.
Evaluation Criteria
  • Record and Playback
  • Object Mapping
  • Web Testing
  • Object Identity Tool
  • Environment Support
  • Extensible Language
  • Cost
  • Integration
  • Ease of Use
  • Image Testing
  • Database Tests
  • Test/Error Recovery
  • Data Functions
  • Object Tests
  • Support

3 = Basic  2 = Good  1 = Excellent

Tool Selection Recommendation

Tool evaluation and selection is a project in its own right.
It can take between 2 and 6 weeks. It will need team members, a budget, goals and timescales.
There will also be people issues i.e. “politics”.
Start by looking at your current situation
– Identify your problems
– Explore alternative solutions
– Realistic expectations from tool solutions
– Are you ready for tools?

Make a business case for the tool

–What are your current and future manual testing costs?
–What are initial and future automated testing costs?
–What return will you get on investment and when?

Identify candidate tools

– Identify constraints (economic, environmental, commercial, quality, political)
– Classify tool features into mandatory & desirable
– Evaluate features by asking questions to tool vendors
– Investigate tool experience by asking questions to other tool users
– Plan and schedule in-house demonstrations by vendors
– Make the decision

Choose a test tool that best fits the testing requirements of your organization or company.

An “Automated Testing Handbook” is available from the Software Testing Institute (www.ondaweb.com/sti), which covers all of the major considerations involved in choosing the right test tool for your purposes.

Wednesday 18 January 2012

Test Consulting: How to Improve a QA Area


Is your client having difficulty measuring QA performance? Is your client undecided about what test strategy to follow for a new application implementation? Is your client looking for testing tool recommendations, or does it need to improve the usage of the existing ones?
Hexaware has a dedicated Test Consulting practice where testing experts are involved in providing test consulting services to clients. During the consulting engagements, Hexaware assesses and evaluates the Approach, People and Technology and fills in the gaps by bringing in our domain experience, best practices, frameworks, tools experience, etc. The consulting services also include tool selection, tool optimization, and TCoE creation and/or optimization.
If you want to learn more about test strategy, the following information will help you execute a test strategy engagement.
Test Strategy Approach
The first step is to identify the problem(s) that the client is facing and define the strategy objectives. After that, I recommend following this approach to execute a test strategy project:
Assessment Areas
As part of the information gathering phase, we leverage our proprietary APT™ (Approach, People, and Technology) methodology to focus on the right areas and meet the strategy objectives. Hexaware’s APT™ methodology is the foundation of all of our QA service offerings; below is a description of each component:
• The Approach component is designed to lay the foundation for the processes that each client uses as part of testing
• The People component focuses on the part of the IT organization dedicated to testing; Hexaware analyzes the groups, roles and responsibilities involved in QA and testing
• The Technology component covers the use of QA and automation testing tools to optimize technology and lower costs.

At the end of the project, we provide the client with the following components:
• Current State: An analysis of the current state with regard to the testing objectives
• Gaps: The gaps found between the current state, best practices, and the desired state
• Recommendations: Our recommendations to close the gaps and meet the organization’s objectives
• Implementation Road Map: A recommended path to follow in order to implement the recommendations.
As a result of the analysis phase, we show our clients the current state in a quantitative graph. This graph evaluates all relevant aspects of an IT organization and prioritizes each category according to the testing objectives. One example of this graph is shown below.

This was an assessment provided as part of a test strategy we created for a leading bank in Mexico for a T24 product implementation. The benefits projected in this strategy were to reduce testing cycles by 30%, automate at least 50% of the manual test cases, and have a defect-free implementation using robust and repeatable testing processes.
Other examples of metrics commonly used as part of the strategy objectives are increasing automation coverage by 30%, increasing productivity by 25%, reducing overall testing cost by 15%, etc.
A test consulting practice is an area full of innovation, industry best practices and shared experiences. Now, with Hexaware blogs, all of us will be able to formally share our experiences, and our colleagues can leverage them for future assignments.

Monday 28 March 2011

Automation Feasibility Checklist


Abstract:
Software testing is an activity aimed at evaluating an attribute or capability of a program or system and determining that it meets its required results. The difficulty in software testing stems from the complexity of software. Testing is more than just debugging. The purpose of testing can be quality assurance, verification and validation or reliability estimation. Testing can be used as a generic metric as well.
In this article, we came up with an Automation Feasibility Checklist (AFC) which would be helpful for effective and efficient automation testing.
Introduction:
A manual test case is the entry point for any automation, but there are some factors that hinder the automation process, including:
  • Lack of clarity in test steps
  • Lack of test data
  • Traceability coverage
  • Identification of Dependencies/Pre-requisites of test cases in a business scenario
  • Presence of logical closure
It is better to identify these problems early so that manual execution effort is reduced, thereby paving the way for effective automation.
Automation Feasibility Checklist (AFC):
Automation Feasibility Checklist is used to identify whether the manual test case is feasible for automation or not. The following are the criteria to determine the automation feasibility of the test cases:
Essential Criteria
  • Dependencies/Pre-requisites
  • Detailed Test steps
  • Test Data availability
  • Expected results
  • Traceability
Optional Criteria
  • Meagre Application Knowledge
  • Subject Matter Expert’s (SME) support
  • Duplication of test steps
  • Availability of multiple sets of data
Snapshot of AFC
Benefits of Automation Feasibility Checklist:
  • Reduces the Manual execution effort.
  • Improves Automation efficiency.
  • Helps to derive effective Manual test cases.
  • Controls and avoids risks in automation.

Thursday 17 March 2011

Welcome back … Good Times!


As the winter recedes and spring is ready to bloom in the northern hemisphere, the fears of doom and gloom recede too. If one looks at the IT industry worldwide through the lens of India-based software service providers, the sense of déjà vu is back. Fond memories of the good times come rushing back.

Now you may ask: what makes me say this? After a dismal period ranging from late 2008 to early 2010, the tide has turned. After a few years, this could well be the first year in which Fortune 500 companies see a year-on-year increase in their planned IT spend. Further, the stress that had engulfed everyone from Wall Street to Main Street and everyone in between has dwindled, and the early signs of a return to growth have clearly emerged. The visible trends and sound bites cut across most industry verticals, which further reinforces the confidence that the IT industry could register healthy growth in 2011.

You could also ask me: have we gone back to the good old days? I would say, well, that is where we want to go, but we are not there yet. While the mood is buoyant, all the pieces of the jigsaw puzzle have not fallen into place yet. There are pockets of concern: continental Europe is still battling sovereign debt, there are geo-political uncertainties in the Middle East and North Africa, and natural calamities in Japan. Hence, it may take a little while longer for all the regions to buzz in unison. Having sounded the cautionary note, the mood has certainly swung in favor of committing investments, driving growth and leveraging the winds of change.

If you agree with this, you may ask: what does the return to growth mean to me? It certainly signals good times! It means that activity levels will remain high. New clients will be added. New deals will be signed. Clients will reap the benefits of IT outsourcing and the service providers will deliver value. The economics will work in favor of both economies, and the employees on both sides will be happy to see more greenbacks (or whatever the color of your currency).
So, join me in ushering in the Good Times again!
===============================================================
The author of the piece is: Sreenivas V. He is the Chief Strategy Officer at Hexaware Technologies Limited. The contents on the blog above are his personal views.

Thursday 8 July 2010

Marketing Automation in B2B – Separating the Wheat from the Chaff


The B2B landscape in its inherent form is a complex jar of beans primarily because the initial connection needs lots of nurturing with the right mix of appropriate communication to ensure the “best weather” for sales interaction. Marketers not only have to measure outcomes right up to revenue but also find the “sweet spot” for marketing and sales to drum up the right notes.
The year 2009 and the first half of 2010 saw a marked shift towards marketing automation worldwide. All of this has helped channelize information and reach out to prospects better; yet it is pertinent to note that today’s internet-savvy prospect is also armed with qualifying information about your brand, your products and your competitors as never before. To get inside the mind of the B2B buyer, marketers not only need to understand his intent from his digital body language but also ensure that the automated lead generation processes in place scale up in terms of the following pertinent factors at any point in time.
  • Are lead recycling programs in place for not-sales-ready leads?
  • Has social media, inbound marketing and marketing automation been integrated seamlessly?
  • Is your marketing communication supported by buyer-centric collaterals that help the buyer decide in your favor?
  • Has your data been data washed and scrubbed clean?
  • Do your web metrics provide actionable information for user profiling and conversion?
  • Are sales and marketing on the same latitude, providing your prospect with the best buying experience?
  • Does your social media spin influence the market’s conversation about your brand effectively?
  • Is your opt-in list getting fresh brew in the form of persuasive communication and supporting newsletter value?
  • Is your marketing funnel measurable and process definitions flexible to innovation?
  • How effective is your conversation model, and does it ensure that you are top of mind when prospects decide to bite the bait?
  • Are you able to capitalize on marketing automation’s great benefit, reporting, and use it effectively as a strategic tool?
  • Do your data-centric marketing plans lead the way to greater customer intelligence, since the value of data will not remain constant?
The above are just a few important cogs that can make or break your lead generation wheel. As marketers brace themselves to capitalize on marketing automation to enhance pipeline opportunities, trends all point to an explosive growth in marketing automation adoption. It is highly imperative that automation vendors provide more sophisticated reporting, better sales engagement processes and social media integration.
To help marketing efficiently separate the wheat from the chaff, marketing automation should not just serve as a driver of operational efficiency but more importantly enhance continuity of dialogue with prospects throughout the decision making/buying cycle at all relevant touch points.
Ultimately it is all about the harvest – the pipeline and revenue, the executive leadership would not mind how you do it.

Wednesday 3 June 2009

Databases and Marketing


Really exciting times to be in Marketing. Well, personally for me they are (the geek in me just loves how databases and analytics have become so critical to marketing! :) )
It is incredible how central data is becoming to the art and science of marketing. In fact, marketing nowadays is so data-driven that it is more science than art.
And I am not referring to ROI and marketing measurement data. That data is used to speak the same language as the CFO and CEO and to get a seat at the table in the executive suite, something the marketing organization has had to learn in order to meet the CFO’s standards.
I am referring to a data-driven approach adopted by the marketing department itself, born of its desire to run programs that are not gambles: campaigns that are designed with the customers and their behaviour in mind and that therefore hit the target more often than not. And that has brought us to a time where databases and analytics are critical for marketing to succeed.
Not being familiar with database management and analytics is not an option anymore.

Monday 25 May 2009

The Power of Now – Paradigm shift in Digital Marketing


For the initiated, and those who have wet their beaks in digital marketing, today’s times are really exciting. And yet, there is constant pressure for marketing strategies to evolve. There is a big shift happening in the way information is served and utilized.
Information is now all about “newness” and “nowness”. It is dynamic, constantly reinventing itself to stay in the rat race for attention. It is just not about standalone pages or browsers anymore. It’s more about streaming content, about relevance in constant churn, about adding enhanced value to your user segment, all of these building a conversation for a symbiotic relationship with a user segment.
Streams of Data, Not just static pools
Much water has flowed under the internet highway since the advent of Really Simple Syndication (RSS). Social media networks have taken center stage and dynamic information is served and lapped up in myriad ways. The possibilities are endless: a chirpy tweet about an event in real time, a face-off with a corporate ambassador on Facebook, a StumbleUpon hit on your new proprietary IP microsite, a Technorati rating of your corporate blog, a rank boost from a Wikipedia link, and so on.
Amid all this noise, it is quite important to make sense of the signals. Especially for B2B marketing which has to make sure that this shift in dynamics is effectively leveraged to generate awareness, drive leads and nurture relationships.
Swimming with the tide of dynamic information
The moot point here is whether this accent on the immediacy of information holds intrinsic value for B2B marketing. Yes, of course. Ignoring this new shift in information distribution is not an option. The key here is to use these social streams to draw people to your site and ensure the site is sticky enough for them to decide in your favor.
Social streams can help build reputation and trust that can help connect with your customers. In lots of ways it gives B2B marketers an easy way to participate in conversations that are about them. It is still early to tell, though; plunging into the stream would be the easiest way to find out.
This paradigm shift has come to stay and technology will be its prime driver. With Twitter going one up on Google in the real-time search stakes (an earthquake this minute appears on Twitter faster than on Google), this information shift could forever change the way even a search result looks.

Monday 18 May 2009

Digital Media Marketing


The power of digital media (usually referred to online media) has inspired me a lot through these years or rather never ceases to amaze me!
In today’s discriminating and techno-savvy world, with almost every business accomplished through the internet, a company’s presence over the web has become mandatory or should I say ‘a crucial factor’?
Here are some of my observations on how online channels are used by businesses today.

Realize
Based on my experience in this field (I’m not a techie, though), I have found that very few companies have exploited digital media through their websites. I think every company needs to realize the importance of its presence on the web, with the ability to act fast, engage a lot of creativity and utilize resources capable of delivering innovative ideas, keeping in mind its budgetary constraints.
Web metrics play a crucial role in determining the type of content that can attract and hold a customer’s interest, make them stay on the website for a long time and make them come back for more, ideally meeting user demand. Today, static images and plain text are being replaced with streaming media and much more interactive applications providing the customer with a personalized experience. I feel many companies are struggling to develop their web presence into a more productive, virtual and truly aggressive weapon.
Though a company has a string of issues involved in managing its web presence effectively, it also has to emphasize the digital media’s potential to transform the entire market scenario, thereby ensuring greater ROI.
I would also add that opportunities are in abundance for any company willing to experiment along these lines!
React
As I was digging through an article on digital media, a particular sentence caught my attention which in fact is apparently inevitable. It says, ‘A website has 10 seconds to draw a visitor or the person will “go back to Google”’! Well, are we talking here about “Love at first site”? Yes we are!
In this fast-changing world, one has to face the reality that none of the above turns into reality so easily. Companies have to think from the customers’ perspective, and analyze and optimize their experience on the website by tuning the infrastructure, improving data center capability and effectively managing content delivery, thereby nurturing relationships.
It’s a do-or-die situation where IT organizations especially need to focus and react immediately on enhancing the digital business in line with their business objectives.
Reap
There’s nothing compared to reaping enhanced ROI through comprehensive digital media marketing in today’s world, which is going through a major financial recession. Though I might daresay that it would be a silver lining amidst the gloom, a well-defined approach with a strategic network, leveraged intelligence and the intent of providing a rich customer experience ensures improved business responsiveness and better cost control.
Well, I do want to write more on the other areas of digital media in forthcoming posts. I would honestly appreciate your comments and suggestions.