Models and Standards for Software Process Improvement

A software process improvement methodology defines a sequence of tasks, tools and techniques that can be used to improve the process of creating software. There are many models for software process improvement; this entry presents the better-known ones along with their characteristics.

CMMI
If you want to start developing software for the US government, following the CMMI standard is often a contractual requirement. In general, this model will be of great use if you have an enterprise that develops software in the United States, because it is very well known in the country.

Capability Maturity Model Integration (CMMI for short) is a model administered by the CMMI Institute. This model is followed mainly by large companies that offer quality, business-level products, which is why the government often requires this improvement process in its contracts for new software. CMMI’s main focus lies in improving risk management.

How it operates:

Capability Maturity Model Integration revolves around something called maturity levels, which are measured taking into account different aspects of whatever enterprise is being analyzed. The levels range from 1 to 5 and represent how well the company is managing its development process and how well the personnel are prepared to follow that process, as well as what must be improved to move up to the best level: level 5.

For each maturity level, there is a specific goal that must be achieved in order to advance to the next level.

For more information on the meaning of each level and the improvement steps, you can visit this page.

TSP and PSP
Team Software Process (TSP) and Personal Software Process (PSP) are process models that help improve development methods overall, emphasizing process methodology and product quality, just like CMMI, but with a key difference. These two are grouped together because they pursue the same type of goal and tend to measure similar things, such as delivery time and productivity. The difference lies in whom each is aimed at: TSP is focused on teams and groups of people (not as large-scale as CMMI), while PSP is focused on individuals, tending to check discipline and skills.

ISO-15504 (SPICE)
The international standard ISO-15504 is also known as SPICE (Software Process Improvement and Capability Determination). This standard evaluates the capability of software processes, just like CMMI or TSP/PSP. Getting an ISO-15504 certification may be as useful as CMMI in the United States, while in other places like Europe both have roughly equal value.

ISO-15504 revolves around three main focuses to improve or judge, which are:

  • Process evaluation
  • Process improvement
  • Process capability or maturity

In order to evaluate these three main objectives, the model classifies the state of the company or enterprise on a scale of maturity levels, as well as a separate scale of capability levels.

Process capability evaluation

This evaluation ranks the overall capability of the series of processes the evaluated company uses to make products (how good the company’s methods are at making quality products). It is ranked on six levels (0 to 5):

  • Level 0 – The process is incomplete
    • Not correctly implemented and does not achieve its objectives.
  • Level 1 – The process works
    • It is implemented and can reach its objectives.
  • Level 2 – The process is managed
    • The process is controlled and its implementation is planned, monitored and adjusted. The results are established, registered, controlled and maintained.
  • Level 3 – The process is established
    • The process is documented in order to guarantee the accomplishment of objectives.
  • Level 4 – The process is predictable
    • The process operates according to defined performance targets.
  • Level 5 – The process is optimized
    • Continuously improves to help meet current and future goals.
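
The capability scale above can be expressed as plain data, which makes it easy to label an assessed process. This is an illustrative sketch only (the short level names and the `label` function are made up here, not part of the standard):

```python
# Illustrative sketch: the six capability levels listed above as a lookup.
CAPABILITY_LEVELS = {
    0: "incomplete",    # not correctly implemented, objectives not achieved
    1: "works",         # implemented and can reach its objectives
    2: "managed",       # planned, monitored and adjusted
    3: "established",   # documented to guarantee objectives
    4: "predictable",   # operates according to defined performance targets
    5: "optimized",     # continuously improves
}

def label(level: int) -> str:
    """Return the short name of a capability level, e.g. label(2) == 'managed'."""
    if level not in CAPABILITY_LEVELS:
        raise ValueError(f"capability levels range from 0 to 5, got {level}")
    return CAPABILITY_LEVELS[level]
```
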
Process maturity evaluation

This evaluation ranks how well organized and effective a company is at identifying, improving and innovating in order to continuously improve the quality of its products (how good the company itself is at improving and making quality products). It is ranked on five levels:

Level 1: Initial

The organization does not have formal procedures for the evaluation, development and evolution of its applications. When failures occur, whatever fundamentals of the method exist are abandoned in favor of shortcuts in the realization and validation process. Organizational efforts then fall back on purely reactive practices, such as “code and test,” which amplify the drift.

Level 2: Reproducible

The management of new projects is based on experience stored from similar projects. The permanent commitment of human resources guarantees the durability of knowledge, but only for as long as those people remain within the organization.

Level 3: Defined

Project management guidelines and procedures are established to enable implementation. The standard software development and evolution process is documented. It is integrated into a consistent, comprehensive set of software engineering and project management processes. A training program has been implemented within the organization to ensure that users and IT professionals acquire the knowledge and skills necessary to take on the roles assigned to them.

Level 4: Managed

The organization establishes quantitative and qualitative objectives. Productivity and quality are evaluated. This control is based on the validation of the main milestones of the project as part of a planned program of measures.

Level 5: Optimized

Continuous process improvement is the main concern. The organization gives itself the means to identify and measure weaknesses in its processes. It seeks the most effective software engineering practices, especially those whose synergy enables continuous quality improvement.

For more information about ISO-15504 you can visit here

MOPROSOFT
Inspired by ISO-15504, MOPROSOFT was actually created by the Mexican Software Engineering Quality Association (AMCIS)!

Unlike ISO-15504 and CMMI, MOPROSOFT is a model designed with enterprises in mind that may not be as big as well-known companies like Microsoft, but would still like to achieve global levels of quality in their products. It takes into account the circumstances and environment of small and medium companies, so the requirements and the evaluation level achieved say more, in detail, about the actual state of the company.

MOPROSOFT divides into three evaluation levels that address different aspects of the company:

Direccion (Direction): This level focuses efforts on the company to apply strategic planning and promote an optimal operation.

Gerencia (Management): This level focuses on improving the management of processes, projects and resources.

Operacion (Operation): This level focuses on specific processes of project administration as well as development and maintenance of software.

 For more information about MOPROSOFT you can follow this link!

IDEAL method:

The IDEAL model, created by the Technology Adoption Architectures Team, is named after the five phases that compose it: Initiating, Diagnosing, Establishing, Acting and Learning. These phases are further divided into fourteen activities. It is perhaps the most flexible of all the models mentioned above, and like all the previous models, it serves as a roadmap for improvement of the company it is applied to.

The Initiating Phase

Critical groundwork is completed during the initiating phase. The business reasons for undertaking the effort are clearly articulated. The effort’s contributions to business goals and objectives are identified, as are its relationships with the organization’s other work. The support of critical managers is secured, and resources are allocated on an order-of-magnitude basis. Finally, an infrastructure for managing implementation details is put in place.

The Diagnosing Phase

The diagnosing phase builds upon the initiating phase to develop a more complete understanding of the improvement work. During the diagnosing phase two characterizations of the organization are developed: the current state of the organization and the desired future state. These organizational states are used to develop an approach for improving business practice.

The Establishing Phase

The purpose of the establishing phase is to develop a detailed work plan. Priorities are set that reflect the recommendations made during the diagnosing phase as well as the organization’s broader operations and the constraints of its operating environment. An approach is then developed that honors and factors in the priorities. Finally, specific actions, milestones, deliverables, and responsibilities are incorporated into an action plan.

The Acting Phase

The activities of the acting phase help an organization implement the work that has been conceptualized and planned in the previous three phases. These activities will typically consume more calendar time and more resources than all of the other phases combined.

The Learning Phase

The learning phase completes the improvement cycle. One of the goals of the IDEAL Model is to continuously improve the ability to implement change. In the learning phase, the entire IDEAL experience is reviewed to determine what was accomplished, whether the effort accomplished the intended goals, and how the organization can implement change more effectively and/or efficiently in the future. Records must be kept throughout the IDEAL cycle with this phase in mind.

For more information about the IDEAL model you can follow this link

Software Quality

Software quality is an aspect of software development that concerns how well software is designed for its intended purpose and how faithfully its requirements and functionalities are followed. This means that the better a program satisfies its requirements, the higher its quality. But how can we know that a program is of great quality? The definition seems vague, because either the code serves its intended purpose or it doesn’t, right? In practice, quality is very project-dependent. What counts as great quality for one program may not mean the same for another, but we can still identify some common traits, which we can list.

Software Quality Factors

These factors are desirable common traits a piece of software must have for us to say it has quality. As with software quality itself, the number of factors and the importance of each one are not set in stone: various people at various times have added or removed some of them, so there are many quality factors you can take into consideration for your project. The factors are not an absolute way to determine the quality of software; some of them are subjective or hard to measure exactly, so we can’t say with precision how good a program is, but we can at least get a close enough idea.

This is a list of the most remarkable ones according to the ISO 25000 standards. As of today, ISO 25000 holds the current standard set of factors for evaluating software. There are also McCall’s and Boehm’s quality models, which were used before the ISO 9126 quality characteristics (later replaced by ISO 25000).

Functional Suitability: How well a system functions and completes the needed tasks when used under specified conditions.

Performance efficiency: This characteristic represents the performance relative to the amount of resources used under stated conditions.

Compatibility: How well a system can exchange information with other systems if necessary.

Usability: Degree to which a product or system can be used by specified users to achieve specified goals with effectiveness, efficiency and satisfaction in a specified context of use.

Reliability: Degree to which a system, product or component performs specified functions under specified conditions for a specified period of time.

Security: Degree to which a product or system protects information and data so that persons or other products or systems have the degree of data access appropriate to their types and levels of authorization.

Maintainability: This characteristic represents the degree of effectiveness and efficiency with which a product or system can be modified to improve it, correct it or adapt it to changes in environment, and in requirements.

Portability: Degree of effectiveness and efficiency with which a system, product or component can be transferred from one hardware, software or other operational or usage environment to another.

For more information about these standards and ISO 25000 click here!

Software Quality control and assurance

This post has covered what software quality is and where we have to aim to reach a quality state. But how do we get there? How do we shoot for the stars? Do we have to guess? Well, luckily for us, there are two very useful sets of activities that we can follow to ensure quality in our software. These very convenient sets are called Software quality control (SQC) and Software quality assurance (SQA), both taking into account the previously seen factors.


Software quality assurance (SQA)

SQA is a set of activities that focuses on establishing and evaluating the process of creating software. As it sounds, SQA focuses on preventing software defects from happening in the first place by organizing the workflow. SQA is present all the way from how the software requirements are stated and managed through testing and production. Yes, software quality assurance is basically telling you to follow a good development life cycle.

How can we apply Software quality assurance?

Project management! Software design! Software development’s good practices! All these are your allies. Software quality assurance will be present when you try to follow a good development life cycle.

SQA focuses not on the product or software itself, but on the process of making it follow the requirements and meet specifications. ISO 9000 offers a very good quality management system for accomplishing a good development process. You can check it here.


Software quality control (SQC)

SQC is a set of activities that focuses on detecting and identifying faults and defects in the software. We are not perfect and tend to make mistakes; even if we follow the software quality assurance practices perfectly, defects are bound to sneak into our program anyway. SQC is oriented to detecting these unwanted flaws so that the final product can shine as best it can.

How can we apply Software quality control?

In order to ensure SQC in the project, testing and reviewing is what you want to do.

Unit testing, integration testing, system testing, etc. are tests that help detect whether the program runs as intended and fully accomplishes its purpose. The better the tests, the more confident we can be that the software has quality. Something important to note: even if the program passes all the tests, a code review is encouraged so we can be extra sure everything is okay.

You can read more about reviews and tests here!

Ensuring Software Quality

Now that we have seen what we can do to reach the stars and have embarked on the adventure, how do we know when we reach the goal? How can we be sure the software created has quality? As was said in the beginning, software quality varies a lot between programs (different requirements, preferences, work environments, etc.), and people have different views of what the standard should be when talking about quality. That is why we have the ISO standards, among others. Following SQC and SQA in order to fulfill a standard will give quality to a program. You know you’ve definitely reached the goal when you and your customer are satisfied with the result!

Software Validation and Verification

Verification and Validation: People sometimes confuse the definitions of verification and validation, so I’ll explain it as simply as possible. Verification is making the thing right: checking that what we are doing complies with our quality standards; basically, that it works. Validation is making the right thing: checking that it is what the client needs. Here are some examples of V&V in requirement definition. Validation: are these requirements what the client needs? Verification: are the requirements doable? Are they unique? Don’t do what you’re not supposed to; do what the client needs and what has been defined and validated. V&V planning: what we need […]


Software Testing

Let’s get this out of the way as soon as possible: different methodologies have different software testing protocols and frameworks. That doesn’t mean that we can’t talk about software testing as a whole, it just means that we have to be careful to include what are arguably the most important aspects of it and talk about them to make sure that no one misses those.

First, let’s list those “most important” aspects of Software Testing. Again, different companies might seek different things from testing, but I consider these to be the basics:

  • What needs to be tested? You need to find and define which systems depend on your team, and test those. You might not be able to control some outer aspects of software, but you can make sure that your part works.
  • What is it supposed to do? Focus on your project’s requirements. Which characteristics and functionality are most important in your software? Once you answer that question, be sure to test them all.
  • Nice to haves: Just because something works, it doesn’t mean that it can’t be better. We all know the classic “if it ain’t broke, don’t fix it”, but remember, we’re not trying to fix what already works; we’re trying to improve on what’s already a good product. Non-functional requirements are sometimes head to head in importance with the actual functional requirements.

Some people might think that the third and final point might not be necessary, but believe me when I say, sometimes the only differences between your product and the competition are the nice to haves, and that’s a war you don’t want to lose.

It’s important to know what’s going to be tested before actually testing your software. If there’s no strategy, not only is your software quality going to suffer, but your team as well. A bunch of people with no idea of what they’re supposed to be doing or how they’re supposed to be doing it is a perfect recipe for chaos.

Once a plan has been defined, there needs to be a test design. There’s a set of test scenarios that are necessary to validate your system. Some might be as trivial as making sure that the date is being displayed correctly, and others might test complex communication between different modules in your system. It doesn’t matter how big or small each particular test might be, if it’s needed for a correct user experience, it needs to be passed. These core tests are gonna help us discover important errors in our program, not superficial defects.

Plans let us assess how good is good enough.

Then, we execute our tests. It’s common good practice to test small, basic functionality first, and finish off with the more complex tests. This way, if a basic part is malfunctioning, you’ll discover it before having to run a large test.

Last, but not least: results are gathered, analyzed, and evaluated to assess whether the code is good enough for production or needs some fine-tuning. Companies usually have a default percentage of tests that must pass for a piece of code to be considered “good enough” for production, and while those percentages are not the same everywhere, they usually fall around the 90% mark. However, if any test in the failing 10% represents a critical malfunction in our code, it needs to be fixed before the code ever sees the light.
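
That gating logic, the pass-rate threshold plus the no-critical-failures rule, can be sketched in a few lines. This is illustrative only; the function name and the result representation are made up here, not taken from any real tool:

```python
# Hypothetical release gate: the pass rate must reach the threshold AND
# no failing test may be marked critical.
def good_enough(results, threshold=0.90):
    """results: list of (passed: bool, critical: bool) tuples, one per test."""
    if not results:
        return False                       # no evidence, no release
    passed = sum(1 for ok, _ in results if ok)
    rate = passed / len(results)
    has_critical_failure = any(crit and not ok for ok, crit in results)
    return rate >= threshold and not has_critical_failure

# 9 of 10 tests pass, none of the failures critical -> releasable
print(good_enough([(True, False)] * 9 + [(False, False)]))   # True
# Same pass rate, but the one failure is critical -> blocked
print(good_enough([(True, False)] * 9 + [(False, True)]))    # False
```
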

That’s a general guideline of how testing should be done, but feel free to tweak it to your necessity. Again, you might need different things so you’ll probably end up doing things differently, but feel free to start off with the base described above.

Now, let’s talk about the actual tests. There’s a level system that makes it easy to categorize each test so that you know what it is that you’re actually testing.

  • Unit tests: these are your component by component tests. They test the core of the functionality of your program by making sure that all components work individually before even thinking about communication between them. This helps us know if our software does what it needs to do at a basic level.
  • Integration tests: once the core functionality is there, you need to test how they work with each other. You test communication between components and find errors in the communication interfaces so that you can fix them ASAP.
  • System tests: Once communication between components has been tested, we test the system as a whole. You exercise the components together in a way that simulates real use and verify that every requirement is met and is up to the established quality standards.
  • Acceptance tests: The final boss. You need to beat this to determine if your software is ready to see the light or not. Requirements are ever-changing, so you need to constantly validate if your software meets them thoroughly. If it does, great, it goes straight to production. If it doesn’t, you’ll have to work a bit harder before you can release it.
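
The first two levels above can be shown with a toy example. The `cart` functions here are invented purely for illustration: a unit test checks one component in isolation, while an integration-style test checks components working together.

```python
import unittest

def item_price(price, qty):          # component 1: price of one line item
    return price * qty

def cart_total(items):               # component 2: uses component 1
    return sum(item_price(p, q) for p, q in items)

class UnitTests(unittest.TestCase):
    # Unit level: one component, in isolation.
    def test_item_price(self):
        self.assertEqual(item_price(3.0, 2), 6.0)

class IntegrationTests(unittest.TestCase):
    # Integration level: components exercised together.
    def test_cart_total(self):
        self.assertEqual(cart_total([(3.0, 2), (1.5, 4)]), 12.0)
```

Running `python -m unittest` against a file like this executes both levels; system and acceptance tests follow the same pattern but exercise the whole deployed product.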

That’s it. Four levels. If this was a platforming game, that might be a piece of cake, but since we’re talking about software quality, this is harder than it looks and sounds. At least it will make sure that your code doesn’t hurt anybody. That’s a plus, right?

Different people inside the testing process play different roles. There are test leaders, who deal with the administrative side of testing (planning the tests, defining objectives and goals, etc.), and then there are the actual testers, who gather the requirements provided by the leader and configure a proper testing environment to run the tests they developed based on those requirements.

Testing environments are often controlled machines that are guaranteed to work and always execute under the same conditions. All supported applications, modules, and components are available in the testing environment, as well as a network connection if needed. Additionally, such environments often have specialized execution-logging tools that make it easier to report bugs, and they are in charge of generating test data that will later be analyzed to improve the program.

Testing can become difficult if you don’t know what you expect to get out of the tests you’re running. That’s why test case design techniques exist. You can test for both positive and negative input values that are always supposed to yield the same results. You could have a combination of positive and negative inputs, along with the different permutations to assess how the program handles mixed inputs. Finally, if you have people that analyze the data gathered from previous testing rounds, your team could predict future failures and test certain components more thoroughly to make sure that nothing breaks unexpectedly.
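One common way to write down such a design is a table of inputs with their expected outcomes, mixing positive and negative cases. A minimal sketch, where `validate_age` and its accepted range are invented for illustration:

```python
# Hypothetical function under test: accepts integer ages from 0 to 130.
def validate_age(value):
    return isinstance(value, int) and 0 <= value <= 130

# Table-driven test design: each row is (input, expected outcome).
CASES = [
    (25, True),       # positive: typical value
    (0, True),        # positive: lower boundary
    (130, True),      # positive: upper boundary
    (-1, False),      # negative: just below the range
    (131, False),     # negative: just above the range
    ("25", False),    # negative: wrong type
]

for value, expected in CASES:
    assert validate_age(value) is expected, (value, expected)
```

Boundary values (0, 130, -1, 131) tend to find the most errors, which is why they appear explicitly in the table rather than only "typical" inputs.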

If a defect is found, it needs to be categorized, assigned, fixed, and verified by QA. If the defect is fixed properly, the ticket is then closed and the results are reported to prevent it from happening again.
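
That lifecycle is essentially a small state machine. The state names and transitions below are one plausible reading of it (including QA rejecting a fix), not the workflow of any particular tracker:

```python
# Hypothetical defect lifecycle: New -> Assigned -> Fixed -> Verified -> Closed,
# with QA able to send a rejected fix back to Assigned.
TRANSITIONS = {
    "New":      {"Assigned"},
    "Assigned": {"Fixed"},
    "Fixed":    {"Verified", "Assigned"},   # QA may reject the fix
    "Verified": {"Closed"},
    "Closed":   set(),                      # terminal state
}

def advance(state, new_state):
    """Move a defect to new_state, rejecting illegal jumps."""
    if new_state not in TRANSITIONS[state]:
        raise ValueError(f"illegal transition {state} -> {new_state}")
    return new_state
```
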

Tools for V&V


For this entry we’re making a little throwback to another post. This time I’m talking about V&V; given that there is already an entry here about V&V, I’m going into the specific tools that are used in it. For this topic, I’ll divide the tools into three categories:

  • Tools for version control
  • Tools for testing
  • Tools for process administration of V&V

Version control


For version control we need a tool that records changes to your code or files over time, so you can recall previous versions. The most widely used one is Git. If you’re reading this you have probably worked with Git already, but the main takeaways are:

  • Compatible with all operating systems
  • Distributed system that allows users to work on a project from various sources
  • Branching, a parallel line of development alongside the main source files, so you can work without negatively impacting the final project
  • Fast & open source

Other tools for version control are: Mercurial, AWS CodeCommit, CVS


Testing

For testing we need a framework that allows you to test your product in various environments. One of the most used is Selenium, a testing tool with the following features:

  • Open source
  • Provides playback and record feature for tests
  • Record actions and export them in a script
  • Supports the following languages: C#, Java, Python, PHP, Ruby, Perl, and JavaScript
  • It supports the following operating systems: Android, iOS, Windows, Linux, Mac, Solaris.
  • And the following browsers: Google Chrome, Mozilla Firefox, Internet Explorer, Edge, Opera, Safari, etc.

Some of the alternatives to Selenium are:

  • Robot Framework.  
  • Cypress.  
  • Katalon Studio.  
  • Screenster.  
  • CasperJS.  
  • Watir.  
  • Cucumber.  
  • Ghost Inspector.

Process administration

In this category, tools are used to manage the V&V process and satisfy management needs, from test cases to requirements. One of the most used in this category is Jira, some of whose uses are:

  • Scrum boards
  • Kanban boards
  • Roadmaps
  • Agile Reports

And some of its alternatives are:

  • ClickUp.
  • Binfire.
  • Basecamp.
  • PivotalTracker.
  • Asana.
  • Clubhouse.
  • Trello.
  • ProofHub.

And that’s it! If you made it to this point, thank you for reading. I hope I can write again soon; if not, it’s been a pleasure to write these. Have a good one, everybody!

Tools for Version Control

Version control is a way to keep track of the changes in the code so that, if something goes wrong, we can compare different versions of the code and revert to any previous version we want.
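
The core idea can be sketched in a few lines of code. This toy `History` class is invented for illustration; real tools such as Git store deltas, authorship, timestamps and far more:

```python
# Toy sketch of version control: every commit stores a snapshot, so any
# earlier version can be read back (i.e., reverted to).
class History:
    def __init__(self):
        self._versions = []

    def commit(self, content):
        """Record a new snapshot and return its version number."""
        self._versions.append(content)
        return len(self._versions) - 1

    def checkout(self, version):
        """Return the content exactly as it was at the given version."""
        return self._versions[version]

# Hypothetical usage:
history = History()
history.commit("print('hello')")
history.commit("print('hello, world')")
assert history.checkout(0) == "print('hello')"   # revert to the first version
```
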

One of the most used tools is Git, which has, among others, the following features:

  • Provides strong support for non-linear development.
  • Distributed repository model.
  • Compatible with existing systems and protocols like HTTP, FTP, ssh.
  • Capable of efficiently handling small to large sized projects.
  • Cryptographic authentication of history.
  • Pluggable merge strategies.
  • Toolkit-based design.
  • Periodic explicit object packing.
  • Garbage accumulates until collected.

As every tool, Git has some Pros and some Cons, let’s review them:

Pros:
  • Super-fast and efficient performance.
  • Cross-platform
  • Code changes can be very easily and clearly tracked.
  • Easily maintainable and robust.
  • Offers an amazing command line utility known as git bash.
  • Also offers GIT GUI where you can very quickly re-scan, state change, sign off, commit & push the code quickly with just a few clicks.

Cons:
  • Complex and bigger history log become difficult to understand.
  • Does not support keyword expansion and timestamp preservation.

Now let’s see what CVS is about:

Features:
  • Client-server repository model.
  • Multiple developers can work on the same project in parallel.
  • The CVS client keeps the working copy of the file up to date and requires manual intervention only when an edit conflict occurs.
  • Keeps a historical snapshot of the project.
  • Anonymous read access.
  • ‘Update’ command to keep local copies up to date.
  • Can uphold different branches of a project.
  • Excludes symbolic links to avoid a security risk.
  • Uses delta compression technique for efficient storage.

Pros:
  • Excellent cross-platform support.
  • Robust and fully-featured command-line client permits powerful scripting.
  • Helpful support from the vast CVS community.
  • Allows good web browsing of the source code repository.
  • It’s a very old, well known & understood tool.
  • Suits the collaborative nature of the open-source world splendidly.

Cons:
  • No integrity checking for source code repository.
  • Does not support atomic check-outs and commits.
  • Poor support for distributed source control.
  • Does not support signed revisions and merge tracking.

SVN is yet another tool for Version Control:

Features:
  • Client-server repository model. However, SVK permits SVN to have distributed branches.
  • Directories are versioned.
  • Copying, deleting, moving and renaming operations are also versioned.
  • Supports atomic commits.
  • Versioned symbolic links.
  • Free-form versioned metadata.
  • Space efficient binary diff storage.
  • Branching does not depend on file size and is a cheap operation.
  • Other features – merge tracking, full MIME support, path-based authorization, file locking, standalone server operation.

Pros:
  • Has a benefit of good GUI tools like TortoiseSVN.
  • Supports empty directories.
  • Has better Windows support compared to Git.
  • Easy to set up and administer.
  • Integrates well with Windows, leading IDE and Agile tools.

Cons:
  • Does not store the modification time of files.
  • Does not deal well with filename normalization.
  • Does not support signed revisions.

Finally, why should we use one of these? Well, it allows you to revert selected files back to a previous state, revert the entire project back to a previous state, compare changes over time, see who last modified something that might be causing a problem, who introduced an issue and when, and more. Using a VCS also generally means that if you screw things up or lose files, you can easily recover. In addition, you get all this for very little overhead.



Tools for Software Quality

In this blog we have seen many different processes that help us ensure that our software is quality software. But performing all of those processes manually, without the help of any tool, would be an enormous task. Thankfully, there are multiple tools that help us perform those processes, and we will talk about them today.

Tools for Version Control

For version control there are many different tools, but I will mainly talk about Git and GitHub, the most popular combination and the most used in the industry.

Before talking about Git, let’s dive deeper into what version control is. Version control is the management of the code development process: a way to manage changes to the source code over time. It records each change and allows you to revert to previously recorded states. As you can see, this is very useful during development, since if you make a mistake, or prefer the way the code was before, you can easily revert the changes. Version control also allows a team to work on the same program at once: members work on the source code, push their changes, and pull others’ changes as needed, and if what one member did conflicts with what another did, the version control tool lets you choose between the two versions or create a new one.
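
Comparing two versions is the other half of the job. As a sketch, Python’s standard `difflib` can stand in for what a VCS diff command shows; the branch names and snippet here are made up for illustration:

```python
# Show the difference between two versions of the same file, in the
# unified-diff format VCS tools also use.
import difflib

old = ["def greet():", "    print('hello')"]            # version on "main"
new = ["def greet(name):", "    print('hello', name)"]  # version on "feature"

diff = list(difflib.unified_diff(old, new,
                                 fromfile="main", tofile="feature",
                                 lineterm=""))
print("\n".join(diff))
```

Lines prefixed with `-` belong only to the old version and lines prefixed with `+` only to the new one, which is exactly how a conflicting change is presented for a human to choose from.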


Git

Git is a software tool that makes version control easy. It uses repositories as folders for the source code, which can be divided into branches that can later be merged into a main one (the one deployed), so you can make experimental changes on other branches and later unite them with the main one or discard them.

One of the main advantages of Git is that, since repositories are managed locally even within a team, you still have version control when you have no network connection; once you are back online, all the commits you made while working normally can be pushed.

Another advantage is that, since it’s so popular, it’s really simple to adopt and most people already know how to work with it, unlike other version control tools. Git also lets people make pull requests, so you can put changes on a kind of waiting list until they are reviewed; if they are fine, they get merged into the main branch.
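The branch-and-merge workflow described above can be sketched in a few commands. This uses a throwaway repository, the branch and file names are made up for illustration, and `git init -b` assumes Git 2.28 or newer:

```shell
# create a throwaway repository to play with
cd "$(mktemp -d)"
git init -q -b main
git config user.email "demo@example.com"
git config user.name "Demo"

# first commit on the main branch
echo "version 1" > app.txt
git add app.txt
git commit -qm "initial commit"

# experimental work happens on its own branch
git checkout -qb experiment
echo "version 2" > app.txt
git commit -qam "try a new version"

# merge the experiment back into main (or simply discard the branch instead)
git checkout -q main
git merge -q experiment
```

At any point, `git log` shows the recorded history, and `git checkout` can take the working tree back to any earlier commit.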

Tools for Testing

Selenium.- Selenium is one of the most popular tools for testing: a free, open-source automated testing framework, mainly used to test web applications. It lets you write Selenium Test Scripts in different languages, like Java or C#, which are ways to automate testing.

JUnit.- JUnit is another popular tool: a Java automation framework, mainly for unit tests, which has been important for the evolution of TDD. JUnit allows you to run tests continuously and tells you whether they succeed or not, and it lets you test multiple things at once. It’s a simple but effective tool which provides immediate feedback.
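JUnit itself is a Java library, but the xUnit pattern it popularized is easy to sketch. Here is the same idea using Python’s built-in `unittest` module as a stand-in illustration (not JUnit itself; `divide` is a made-up function under test):

```python
import unittest

def divide(a, b):
    """Function under test: plain division that rejects a zero divisor."""
    if b == 0:
        raise ValueError("division by zero")
    return a / b

class DivideTest(unittest.TestCase):
    # Each test_* method runs independently, like a JUnit @Test method.
    def test_normal_division(self):
        self.assertEqual(divide(10, 2), 5)

    def test_zero_divisor_raises(self):
        with self.assertRaises(ValueError):
            divide(1, 0)

if __name__ == "__main__":
    # Runs every test and reports immediately which ones passed or failed
    unittest.main(exit=False, verbosity=2)
```

Just like JUnit, the framework finds the test methods, runs each one, and gives immediate pass/fail feedback.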

Tools for process administration

Trello. Trello is one of the most widely used tools in the industry, since it gives the development team a Kanban board, a staple of Agile development. That way everyone is on the same page and can easily see, in a visual way, which tasks are the immediate priority, which tasks are already done, who is working on what, and so on. It’s a very useful tool that every software team should use.

Notion. Notion is a great tool that allows teams to have their own “wiki”: they can share the same notes, divide the information as needed, and have the resources at hand for everyone to use. It’s a way to keep the information about the software neat, easy to access, and easy to modify.

Software Testing

Software testing is the process of evaluating the functionality of a software application with the intent of determining whether or not the software meets its specified requirements, and of identifying defects, so that a quality product can be produced.

According to the ANSI/IEEE 1059 standard, testing is “a process of analyzing a software item to detect the differences between existing and required conditions (i.e., defects) and to evaluate the features of the software item.”

Why do we need Software Testing?

Because it allows us to ensure that our work is being done correctly and is delivering the expected results. After all, who wants software that doesn’t work, or that doesn’t do what it is supposed to? Testing is the process, or group of processes, that checks how everything is working: every button, every page, every interface. It all needs testing.

And what if there is no software testing in the software development process?

Everything is changing rapidly and no one wants to be left behind. Our lives keep improving and we count on technology every day. So think about this: we access our bank online, we shop online, we order food online, and much more. Now, what if these systems turn out to be defective? We all know that one small bug can have a huge impact on a business in terms of financial loss and goodwill.

Some of the reasons why testing has become a very significant and integral part of the field of information technology are as follows:

  1. Cost-effectiveness
  2. Customer Satisfaction
  3. Security
  4. Product Quality

Testing Team

Test Manager: The test manager is hired when there are many testing groups. The number of testers and testing groups depends on the software testing workload. The test manager has the following major roles:

  • Prepare the test strategy
  • Prepare the test budget
  • Define test levels and test cycles
  • Develop a strategy for estimating test effort
  • Develop a strategy for test documentation, metrics and reporting
  • Guide and control the testing teams

Test Leader: The test leader performs the roles of the test manager in the absence of a test manager. In addition to that, the test leader’s roles and responsibilities are listed below:

  • Prepare the test plan at each test level based on test strategy
  • Define the objectives, test items, approaches, risks and contingencies in the testing process
  • Assign roles and provide schedule to testers
  • Identify the test specifications and test activities for testers
  • Gather metrics and track the testing progress
  • Define entry and exit criteria

Testers: The testers group can comprise entry-level testers, senior testers, performance testers, automation testers and testers performing specific tests. Some of the responsibilities of a typical tester are listed below:

  • Gather the test requirements
  • Review the project documents to understand the requirements and identify the errors
  • Assist the test lead to prepare the test plan
  • Create the test documents like traceability matrix, test data and test cases
  • Set up and verify the test environment
  • Test the software at different levels and record the results
  • Identify, report and track the defects

Test Case Design Technique

Following are the typical design techniques in software engineering:

1. Deriving test cases directly from a requirement specification (black-box test design techniques). These techniques include:

  • Boundary Value Analysis (BVA)
  • Equivalence Partitioning (EP)
  • Decision Table Testing
  • State Transition Diagrams
  • Use Case Testing
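To make the first two techniques concrete, suppose a hypothetical requirement says a form accepts ages from 18 to 65 inclusive. Equivalence partitioning picks one representative value per class, and boundary value analysis exercises the edges of the valid range (the function and range here are made up for illustration):

```python
def is_valid_age(age: int) -> bool:
    # Hypothetical requirement: valid ages are 18 through 65 inclusive
    return 18 <= age <= 65

# Equivalence partitioning: one representative per class
assert is_valid_age(10) is False   # below-range class
assert is_valid_age(40) is True    # in-range class
assert is_valid_age(70) is False   # above-range class

# Boundary value analysis: test right at and just beyond the edges
assert is_valid_age(17) is False
assert is_valid_age(18) is True
assert is_valid_age(65) is True
assert is_valid_age(66) is False
```

Notice that the test cases come entirely from the stated requirement, without looking at how the code is written, which is what makes this black-box design.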

2. Deriving test cases directly from the structure of a component or system (white-box test design techniques):

  • Statement Coverage
  • Branch Coverage
  • Path Coverage
  • LCSAJ Testing
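As a small sketch of the structural techniques, consider this made-up function with two decision points. Statement coverage requires every line to execute at least once; branch coverage additionally requires each `if` to be taken both ways, which the three inputs below achieve:

```python
def classify(n: int) -> str:
    if n < 0:
        return "negative"
    if n == 0:
        return "zero"
    return "positive"

# Three inputs chosen from the code's structure, not from a requirement:
# -5 takes the first branch, 0 takes the second, 7 falls through both,
# so every statement runs and every branch is exercised both ways.
cases = {-5: "negative", 0: "zero", 7: "positive"}
for value, expected in cases.items():
    assert classify(value) == expected
```

Here the test cases come from reading the code itself, which is the defining trait of white-box design.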

3. Deriving test cases based on the tester’s experience on similar systems or the tester’s intuition (experience-based techniques):

  • Error Guessing
  • Exploratory Testing


Faculty, P. S. (2018, March 29). DIFFERENT ROLES IN A SOFTWARE TESTING TEAM. H2kinfosys Blog.

Test Case Design Technique – Tutorialspoint. (n.d.). TutoPoint. Retrieved November 30, 2020, from

R. (2020, November 28). What Is Software Testing | Everything You Should Know. Software Testing Material.