Tools for V&V

From earlier topics we now know what V&V is and how we can execute it, but this world is all about making life easier, and how else can you make life easier for developers than by giving them tools to facilitate their job? The tools below will help greatly in the verification and validation process.

Tools for version control

Version control has been very useful for every developer out there. Version control tools are systems that record changes made to code and other files. These records can be consulted later, perhaps to restore previously overwritten changes or simply to check what changed from one version of the software to another.

Git

Git is one of the best version control tools available. I personally use it all the time for school and work-related projects.

CVS

Another popular choice for version control, CVS works in a different way than Git but accomplishes the same goal. Unlike Git, which works with repository-wide commits, CVS keeps track of each file individually, with each file having its own version number.

Mercurial

Written in Python, Mercurial is similar to Git in many ways; the key difference is how each tool stores its commits. Git represents commits as snapshots, each one containing all the files in the repository, while Mercurial represents them as diffs, meaning a commit only stores what changed. Storing diffs can save space, but it can also make Mercurial a little slower for some operations.
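
To make the snapshot-versus-delta distinction concrete, here is a toy illustration in Python using the standard difflib module. It only sketches the two storage ideas; it is not how Git or Mercurial actually implement their object stores.

    import difflib

    version_1 = ["def greet(name):\n", "    return 'Hello ' + name\n"]
    version_2 = ["def greet(name):\n", "    return 'Hello, ' + name + '!'\n"]

    # Snapshot-style storage: keep the full content of every version.
    snapshots = {1: version_1, 2: version_2}

    # Delta-style storage: keep the first version plus the diff needed to reach the next one.
    delta = list(difflib.unified_diff(version_1, version_2, fromfile="v1", tofile="v2"))
    deltas = {1: version_1, 2: delta}

    print("".join(snapshots[2]))  # the full file is immediately available
    print("".join(deltas[2]))     # only the changes are stored; v2 must be rebuilt from v1 plus the delta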

These are not the only tools on the market; there are many more, but these are among the most famous. If you want more information about these and other version control tools you can visit here

Tools for testing

Test automation is used widely across the industry, and this has inspired the creation of tools that make automation and other forms of testing a bit easier.

Selenium

Because it is a pain to test a web app in each separate browser to see whether it works and displays correctly, Selenium is used to automate tests of web applications across various browsers and platforms. You can write the tests in several programming languages, and the project is supported by some of the largest browser vendors.
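
A minimal sketch of what a Selenium test can look like with the Python bindings; it assumes a local Chrome installation with a matching driver available, and it uses example.com as a stand-in for the page under test.

    from selenium import webdriver
    from selenium.webdriver.common.by import By

    driver = webdriver.Chrome()  # assumes Chrome and its driver are installed locally
    try:
        driver.get("https://example.com")          # open the page under test
        heading = driver.find_element(By.TAG_NAME, "h1")
        assert "Example Domain" in heading.text    # check that the page rendered as expected
    finally:
        driver.quit()                              # always release the browser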

TestingWhiz

Another useful all-in-one tool, TestingWhiz offers various automated solutions that range from web testing to test optimization and automation. Although it sits behind a subscription, this tool seems pretty useful for saving time in testing procedures.

As with version control, there are a great number of testing tools and you are welcome to experiment with different kinds. For more information about these tools you can visit here

Tools for process administration

There are many tools that can be used in process administration. The main focus of this type of tool is to manage the administrative parts of a project: user stories, requirements, releases, etc. We have surely heard about process administration tools before, for example monday.com, but there are also tools aimed specifically at software development.

Jira

Jira seems to be the most famous tool designed specifically for software development management. Inside it you can create user stories, manage product releases and view your team’s progress. You can use this tool for free with up to 10 registered users; after that it has a cost.

Basecamp

Basecamp is also one of the more popular choices when it comes to software development management, mainly because it can integrate plug-in services that connect to Dropbox or almost anything else you need. The disadvantage of this tool is that it sits behind a paywall.

For more information about process administration tools oriented toward software development you can visit here

These are examples of tools used to facilitate the application of verification and validation to the project.

Software Testing

Software testing is the act of testing… software… yeah, it’s pretty obvious, but the topic still has its tricks. The goal of software testing is to provide enough information about what is being tested to determine whether it meets certain requirements.

These requirements commonly include, but are not limited to:

  • Meeting the requirements that guided its design and development
  • Responding correctly to all kinds of inputs
  • Performing its functions within an acceptable time
  • Being sufficiently usable
  • Able to be installed and run in its intended environments

Like other topics in this blog, this one has a process to follow, with a number of steps that can be seen below.

Test Plan: We have seen this before in verification and validation: a test plan is a document detailing what needs to be tested, how it will be tested, the objective of each test and what to do if the test fails.

Test Design: This will be defined in the test plan; the test design dictates the nature of the test. Will it be a manual test? Will it be automated? How does it work?

Test execution: At this point the test is actually run. Important things to consider here are how much time you have for it and how long it takes to complete.

Exit criteria: Unexpected problems are common in tests; that is what exit criteria are for. They define what counts as completing the test and what halts it completely.

Test reporting: This step takes place once the test has been completed: how well did it go? Was the result satisfactory?

Manual vs automated tests

Manual tests and automated tests each have their advantages and disadvantages, and with a complex codebase or system the line that separates these two types of tests becomes more evident.

Manual testing is done by a human who gives inputs and observes what the software returns. Its advantage is that a person interacts directly with the product, which yields very specific feedback right off the bat and gives the tests a degree of flexibility.

Automated testing, on the other hand, is a more robust and straightforward type of test. These tests are performed by a machine that has been preloaded with the necessary instructions. They might not be as flexible, and perhaps not as visual, as a manual test, but they are quicker to execute (you only have to write a test once in order to run it 1000 times), they remove human error from the execution itself, and they can provide quality results, depending on how well they were written.
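
As a small illustration of the “write it once, run it a thousand times” point, here is a sketch using Python’s built-in unittest module against a hypothetical add function.

    import unittest

    def add(a, b):
        """Hypothetical function under test."""
        return a + b

    class TestAdd(unittest.TestCase):
        def test_many_inputs(self):
            # One automated test exercises a thousand input pairs with no manual effort.
            for i in range(1000):
                self.assertEqual(add(i, 1), i + 1)

    if __name__ == "__main__":
        unittest.main()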

With this in mind we can now see the various types of tests there are and how they are ordered:

Levels of testing

Unit Testing

During this first round of testing, the program is submitted to assessments that focus on specific functions, individual programs or components of the software to determine whether each one is fully functional. One of the biggest benefits of this testing phase is that it can be run every time a piece of code is changed, allowing issues to be resolved as quickly as possible. It is common for software developers to perform unit tests before delivering software to testers for formal testing.

Integration Testing

Integration testing combines all of the units within a program (most likely already covered by unit testing) and tests them as a group. This testing level is designed to find interface and communication defects between the modules/functions. It is particularly beneficial because it determines how efficiently the units are running together.
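
A hedged sketch of what this can look like in code: two hypothetical units, a repository and a service that depends on it, are wired together, and the test checks the communication between them rather than either unit in isolation.

    import unittest

    class InMemoryUserRepository:
        """Hypothetical storage unit."""
        def __init__(self):
            self._users = {}
        def save(self, user_id, name):
            self._users[user_id] = name
        def get(self, user_id):
            return self._users.get(user_id)

    class GreetingService:
        """Hypothetical unit that depends on the repository."""
        def __init__(self, repository):
            self._repository = repository
        def greet(self, user_id):
            name = self._repository.get(user_id)
            return f"Hello, {name}!" if name else "Hello, stranger!"

    class TestGreetingIntegration(unittest.TestCase):
        def test_service_and_repository_work_together(self):
            repo = InMemoryUserRepository()
            repo.save(1, "Ada")
            service = GreetingService(repo)
            # The interesting part is the interface between the two units, not each unit alone.
            self.assertEqual(service.greet(1), "Hello, Ada!")
            self.assertEqual(service.greet(2), "Hello, stranger!")

    if __name__ == "__main__":
        unittest.main()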

System Testing

System testing is the first level in which the complete application is tested as a whole. The goal at this level is to evaluate whether the system has accomplished all of the requirements and to see that it meets the quality standards of the project.

System testing is done by testers who did not play a role in developing the program. This testing is performed in an environment that closely resembles production.
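
Since this level drives the deployed application from the outside, system tests often look like scripted interactions with a production-like environment. A minimal sketch, assuming a hypothetical staging URL with a /health endpoint and the third-party requests library:

    import requests

    BASE_URL = "https://staging.example.com"  # hypothetical production-like environment

    def test_health_endpoint_responds():
        # The whole deployed system (web server, application, database) must be up for this to pass.
        response = requests.get(f"{BASE_URL}/health", timeout=5)
        assert response.status_code == 200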

Acceptance Testing

The final level, Acceptance testing (or User Acceptance Testing), is conducted to determine whether the system is ready for release. During the Software development life cycle, requirements changes can sometimes be misinterpreted in a fashion that does not meet the intended needs of the users. During this final phase, the user will test the system to find out whether the application meets their business’ needs.

For more types of testing visit here: https://www.softwaretestinghelp.com/types-of-software-testing/

Activities and roles in testing:

We’ve seen so far that testing is quite an important task for maintaining quality in the software, but another important aspect is the people in charge of the tests. There are several roles involved in the testing process, each one working differently from the others so that as many aspects of the product as possible are covered.

The types of roles involved in testing are shown below:

QA Leader: The face and manager of the testing team; this position represents the connection between the team and whoever wants to contact it

  • Acts as a point of contact for interactions inside and outside the team, representing the entire team when dealing with customer relationships
  • Decides on the test budget and schedule, manages resources and plans the testing process
  • Delegates the testing activities to the team
  • Makes status reports of the testing activities

Test Lead: Highly intelligent and wise, this position will provide great understanding of technical skills such as data management, test design and test development

  • Has technical expertise regarding test programs and how to approach them
  • Provides support for customer interface and will deliver progress status reports.
  • Validates the quality of testing requirements. (Testability, test design etc.)
  • Implements the test process
  • Ensures documentation is complete

Test engineer: This position’s job is to determine the best way to create a process to test a product as completely as possible. There are different roles a test engineer can have, and their area of expertise depends on which role they take.

  • Usability Test engineer: Best suited for designing usability testing scenarios and has great understanding of usability issues.
  • Manual Test engineer: Great understanding of GUI design and its standards. Best suited for manual testing and for attending test procedure walkthroughs.
  • Automated Test engineer: Great understanding of software testing and GUI design. It is best suited for working with test tools in order to build and execute automated tests.

  • Network Test engineer: Great understanding of databases, operating systems, programming languages and networking. They are best suited for integration testing since they know a little about everything.

Test library and Configuration Specialist: This position will be in charge of managing changes and version control.

  • Manages changes to the test-script
  • Maintains test-script version control
  • Creates test builds whenever required

Tester: This position has to interact efficiently with the testing team; it designs scenarios that require testing and executes the tests as defined in the standards and procedures.

  • Designs the testing scenarios for usability testing
  • Analyzes test results and submits reports to the development team
  • Conducts testing as marked in the standards and procedures

Testing environments

We already know about the process of testing and the roles needed to accomplish the task, but we still need to know about something that will help testing greatly: testing environments.

A test environment is a combination of hardware, software, data and configuration required to run different test cases. These environments commonly take the shape of servers and virtual setups that can be replicated. A testing environment is a very important tool to have at your disposal if you want to have confidence in the test results and to go from saying “Well, it works just fine on MY computer” to saying “It works on all operating systems with these minimum specifications”.

There are several types of testing environments, each focused on configuring and exercising different aspects of the software:

Integration testing environment

In this environment different software modules are integrated to form a system, and then the test will proceed to verify its behavior.

Environment setup depends on the type of product being tested. It usually consists of ensuring the availability of the right hardware, the right software versions and the right configuration. The trick with this environment is that it should mimic production scenarios as closely as possible, including the servers, databases and any other services involved. This environment focuses on testing communication and functionality between all the components, which is why every other aspect of the environment is set up to work in optimal conditions.

Performance testing environment

In this environment the system’s performance will be tested. This means that performance goals like concurrency, throughput, response time and stability will be put to the test. A good performance testing environment plays a crucial role in finding the bottlenecks in the system under test: which component is slowing things down, or what breaks when too much stress is placed on it, are the questions this environment focuses on answering.

The test will vary in the specifications of the hardware, amount of concurrency in the system and the volume of data managed.

This type of test is time-consuming and expensive compared to other tests, which is why it is recommended to run these tests only when required, for example after every major change in the system or once a month.
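
As a rough illustration of the kind of measurement a performance environment automates, the sketch below fires concurrent requests at a hypothetical endpoint and records response times using the Python standard library plus the requests package; real performance tests rely on dedicated load-testing tools and far more careful measurement.

    import time
    from concurrent.futures import ThreadPoolExecutor

    import requests

    URL = "https://staging.example.com/search"  # hypothetical endpoint under load

    def timed_request(_):
        start = time.perf_counter()
        requests.get(URL, timeout=10)
        return time.perf_counter() - start

    # 50 concurrent virtual users issuing 500 requests in total.
    with ThreadPoolExecutor(max_workers=50) as pool:
        latencies = list(pool.map(timed_request, range(500)))

    print(f"average latency: {sum(latencies) / len(latencies):.3f}s")
    print(f"worst latency:   {max(latencies):.3f}s")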

Security testing environment


This type of testing environment focuses on finding security flaws and vulnerabilities. To create this environment organizations usually contract internal or external security experts that will try to identify any vulnerability in the software.

A good security testing environment is isolated from external sources and uses mock data rather than production data.

Chaos testing environment

The chaos environment focuses on trying to make the product fail at some point of the execution and on seeing whether that failure cascades into bringing the whole system down. This is done by understanding the dependencies within the project. These environments are set up just like a production environment, and the goal is to push the servers into their error recovery behavior; it is a bit like a school running a fire drill to see how the students react and how quickly they get out. Because of its nature, chaos testing is executed about as rarely as performance testing, and very often the two are run alongside each other.
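
Chaos experiments are normally run with dedicated tooling against production-like infrastructure, but the core idea, injecting random failures into a dependency and checking that the system recovers instead of cascading, can be sketched in a few lines of Python. Everything here (the payment gateway, the retry and fallback logic) is hypothetical and only illustrates the principle.

    import random

    class FlakyPaymentGateway:
        """Hypothetical dependency that fails at random, as a chaos experiment would make it."""
        def charge(self, amount):
            if random.random() < 0.3:              # inject a failure 30% of the time
                raise ConnectionError("gateway unreachable")
            return f"charged {amount}"

    def place_order(amount, gateway, retries=3):
        # The system under test: it should degrade gracefully instead of letting the failure cascade.
        for _ in range(retries):
            try:
                return gateway.charge(amount)
            except ConnectionError:
                continue
        return "order queued for later processing"  # fallback path exercised by the chaos experiment

    print(place_order(42, FlakyPaymentGateway()))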

Test case design techniques

There are different techniques used to test the software’s functionality. These techniques can be categorized into three types:

Specification-based technique

Also known as the black-box technique, it consists of testing based on defined specifications and test cases, which gives tests of this nature very wide test-case coverage. One thing to note is that this technique does not use any information about the internal structure of the component or system under test.
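
As a small illustration, suppose the specification says a password must be 8 to 20 characters long (an invented requirement for this sketch). Black-box test cases can be derived from that sentence alone, using boundary values, without ever reading the implementation of the hypothetical is_valid_password function:

    # Test cases derived only from the specification "8 to 20 characters";
    # the implementation is treated as a black box.
    SPEC_CASES = [
        ("a" * 7,  False),   # just below the lower boundary
        ("a" * 8,  True),    # lower boundary
        ("a" * 20, True),    # upper boundary
        ("a" * 21, False),   # just above the upper boundary
    ]

    def check(is_valid_password):
        for password, expected in SPEC_CASES:
            assert is_valid_password(password) == expected, password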

Structure-based technique

Also known as the white-box technique, the structure-based technique takes into consideration the internal structure of the tested components and uses it to derive whatever test cases are deemed necessary.
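
By contrast, white-box test cases are chosen by reading the code itself, for example picking inputs so that every branch of the hypothetical function below runs at least once; tools such as coverage.py can report how much of the structure the tests actually exercised.

    def shipping_cost(weight_kg, express):
        # Internal structure: two decision points, so four branch combinations to cover.
        if weight_kg <= 1:
            cost = 5
        else:
            cost = 5 + 2 * (weight_kg - 1)
        if express:
            cost *= 2
        return cost

    # One input per branch combination, chosen by reading the code above.
    assert shipping_cost(1, False) == 5
    assert shipping_cost(3, False) == 9
    assert shipping_cost(1, True) == 10
    assert shipping_cost(3, True) == 18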

Experience based techniques

This technique will depend on the experience of the team using it. The knowledge and experience of people are used to derive test cases and it is more useful in identifying special tests not easily captured by formal techniques.

For more information you can visit this link

Process for control and management of defects in artifacts

The only thing left to see about software testing is how we can manage the defects we find with the previous knowledge. This is as simple as following a few steps to classify each defect and resolve it.

Discovery

In the discovery phase, the project team tries to discover as many defects as possible before anyone else does. A defect is considered discovered, and its status changes to accepted, when it is acknowledged and accepted by the developers.

Categorization

Defect categorization helps the software developers prioritize their tasks. Developers categorize defects based on the effect they have on the product, typically by severity and priority, and that classification drives what gets fixed first.

Resolution

Resolution is the process of fixing the defects. It starts with assigning discovered defects to developers; the defects are then fixed and the developers send a resolution report to the test manager.

Verification

After the development team has fixed and reported a defect, the testing team verifies that it is actually resolved.

Closure

Once a defect has been resolved and verified, its status is changed to closed. If the defect has not been resolved, it goes back to the resolution step together with the new observations.

Reporting

Test managers prepare and send the defect report to the management team for feedback on the defect management process and on the status of the defects.
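
The lifecycle above maps naturally onto a small state machine, which is roughly how defect trackers encode it. The status names and transitions in this sketch are illustrative and not taken from any particular tool.

    from enum import Enum, auto

    class DefectStatus(Enum):
        DISCOVERED = auto()
        ACCEPTED = auto()
        ASSIGNED = auto()
        FIXED = auto()
        VERIFIED = auto()
        CLOSED = auto()
        REOPENED = auto()

    # Allowed transitions, following the discovery -> resolution -> verification -> closure flow.
    TRANSITIONS = {
        DefectStatus.DISCOVERED: {DefectStatus.ACCEPTED},
        DefectStatus.ACCEPTED:   {DefectStatus.ASSIGNED},
        DefectStatus.ASSIGNED:   {DefectStatus.FIXED},
        DefectStatus.FIXED:      {DefectStatus.VERIFIED, DefectStatus.REOPENED},
        DefectStatus.VERIFIED:   {DefectStatus.CLOSED},
        DefectStatus.REOPENED:   {DefectStatus.ASSIGNED},
        DefectStatus.CLOSED:     set(),
    }

    def advance(current, new):
        if new not in TRANSITIONS[current]:
            raise ValueError(f"cannot move a defect from {current.name} to {new.name}")
        return new

    status = advance(DefectStatus.DISCOVERED, DefectStatus.ACCEPTED)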

Software Review

Software review is, as its name implies, a process in which people with different kinds of involvement in a project examine the software to see whether it meets the requirements for approval. Software review is also part of the software development life cycle! So it is important for the project to pass through this phase in order to come out stronger.

Reviews are actually done by multiple people, each of them checking different aspects of the product for different purposes. Project personnel, managers, users, customers, representatives and more give feedback and signs of approval for the project.

Knowing about this topic will increase efficiency in the process of testing and validating the software’s functionality and behavior, and it will improve productivity, since potential errors are detected early.

Types of reviews

Software review can be divided into three main types:

Software peer reviews

It is conducted by the creators of the software and their peers in order to evaluate the technical content and quality of the work. Checking the quality of the software and finding potential errors and defects are the two main activities in this type of review. Peer reviews actually have some subtypes that serve different purposes!

Code review: This is as simple as it gets: a systematic examination of the computer source code.

Pair programming: In this type of peer review, two or more persons develop code together at the same workstation, and then they will review each other’s code.

Inspection: It is a formal type of peer review in which the reviewers follow a well-defined process to find defects in the code.

Walkthrough: In this type, the creators of the software lead other members of the team through the product; the ones viewing the product ask questions and make comments about defects they may find.

Technical Review: It is a form of review in which qualified personnel other than the creators review the software product and see if they find defects regarding usability and problems with specifications and standards.

Software management reviews

They are conducted by a management team directly responsible for the project in order to evaluate the status of the product (how much has been done, what is NOT done yet) and will then direct a course of action depending on the project’s status and schedule. This type of review can be conducted by stakeholders.

If you are doing this type of review, remember to:

  • Check consistency with, and deviations from, the plans.
  • Check the adequacy of the management procedures.
  • Assess project risks.
  • Evaluate the impact of actions and ways to measure those impacts.
  • Produce a list of action items, issues to be resolved and decisions made.
  • Remember that the retrospective is important.

Software audit reviews

They are conducted by people external to the project in order to evaluate whether the project complies with its imposed set of criteria, in terms of standards, specifications and agreements. The results of this review include observations, recommendations, corrective actions and a pass or fail assessment.

Software review steps

  1. Entry evaluation: The Review Leader uses a standard checklist of entry criteria to ensure that optimum conditions exist for a successful review.
  2. Management preparation: Responsible management ensure that the review will be appropriately resourced with staff, time, materials, and tools, and will be conducted according to policies, standards, or other relevant criteria.
  3. Planning the review: The Review Leader identifies or confirms the objectives of the review, organises a team of Reviewers, and ensures that the team is equipped with all necessary resources for conducting the review.
  4. Preparation: The Reviewers individually prepare for group examination of the work under review, by examining it carefully for potential defects, the nature of which will vary with the type of review and its goals.
  5. Examination: The Reviewers meet at a planned time to pool the results of their preparation activity and arrive at a consensus regarding the status of the document (or activity) being reviewed.
  6. Exit evaluation: The Review Leader verifies that all activities necessary for successful review have been accomplished, and that all outputs appropriate to the type of review have been finalised.

Reviewing key products

Reviewing the key work products is done through the different types of peer reviews listed above, of course each review checks different products.

Plans: These are checked in all types of peer reviews; checking for omissions or inadequacies must be part of the review process in any of them.

Requirements: These are checked in the walkthrough review only, since the requirements are not imposed solely by the developers.

Design: This is checked in the walkthrough and inspection reviews, since the design should follow certain guidelines and should receive feedback from a group of the people involved.

Code: This is checked to some extent in all types of reviews, but it is only thoroughly examined in the walkthrough review, since that is where standards and specifications are checked.

For more information you can visit here!

Verification & Validation

Verification and validation (or V&V for short) is the process of investigating and checking to what degree the software satisfies the specifications and standards that arise from the needs of the project, while also making sure it can successfully complete the tasks it was intended to do.

Verification and Validation are two completely different things

For more information about definitions and differences you can check this link

Verification (Are we building the product right?): It is the process of making sure the software being verified can carry out the tasks imposed on it without any bugs. This can be summed up as making sure it has no errors, no strange behavior, and that it functions as it should.

Validation (Are we building the right product?): It is the process of making sure that the software can actually fulfill the requirements imposed on it. It is the process of checking that the software created is actually the software we want or need.

To verify and validate, there is a common, fairly general but recommended procedure to follow. Within the software life cycle, verification has to be followed by validation, meaning we first verify that our software actually works and then validate that it is actually useful. If we follow the steps of software creation in the image below, we can see where verification and validation take place.

International standards for V&V of Software

To verify and validate your software properly and effectively, there are very helpful international standards you will want to follow in order to reach global acceptance of your procedures and of your software’s quality. There is a large number of standards that help in applying V&V to any kind of software (ISO 17029, IEEE 1012 and the IEC standards being among the best known), each varying in the methodology it uses to standardize the V&V process.

Planning V&V

Having chosen a standard for your verification and validation processes is one thing, but another important aspect of V&V is the implementation plan you will follow to integrate it into the software project.

It is very important to have a plan for V&V at the very start of the development, and so it is recommended that you write down a verification and validation planning document containing the details about what is to be tested and how. This is called a master plan.

Below are some steps you can follow to build such a document.

  1. Describe and set the objectives: Defining the overall project objectives, milestones and team activities is a key first step of the document.
  2. Define the components you are going to test: Listing the software features you are going to test will serve as a guide for everyone involved on what needs to be verified and validated. It is also important to specify tests ranging from a single component to a cluster of them.
  3. Define the components you are NOT going to test: There may be components for which testing is really expensive and not very rewarding, so it may be better to skip extensive testing for them. Including them in the document will clear up misconceptions and set accurate expectations about the process.
  4. Define how the components will be verified and validated: Have a clear definition of what makes a test case pass and what makes it fail verification and validation: which criteria each component needs to meet and which cases it should be able to handle.
  5. Define procedures in case of failure: Components will often fail some tests, so it is important to settle what happens in that situation. Can other tests still be executed? Do we need to stop and adjust the component before continuing? (A small sketch of how such a plan can be captured as data follows this list.)
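
One lightweight way to keep the resulting master plan actionable is to capture it as structured data that both the team and simple scripts can read. The sketch below is purely illustrative: the VnVPlan class and its fields are assumptions that mirror the five steps above, not part of any standard.

    from dataclasses import dataclass, field

    @dataclass
    class VnVPlan:
        objectives: list = field(default_factory=list)     # step 1: overall objectives and milestones
        in_scope: list = field(default_factory=list)        # step 2: components that will be tested
        out_of_scope: list = field(default_factory=list)    # step 3: components deliberately not tested
        pass_criteria: dict = field(default_factory=dict)   # step 4: what counts as a pass, per component
        on_failure: str = "log the defect, adjust the component, then rerun the affected tests"  # step 5

    plan = VnVPlan(
        objectives=["verify the login flow", "validate it against the user stories"],
        in_scope=["login form", "session handling"],
        out_of_scope=["legacy report exporter"],             # expensive to test, little payoff
        pass_criteria={"login form": "all acceptance tests pass in the staging environment"},
    )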

With these steps a solid V&V plan will be created, ready to be followed, but executing a plan is always a very different challenge from creating it. As the old saying goes, “No plan survives contact with the enemy.” Administering a V&V plan will therefore be our next step.

Adopting a V&V plan

Adopting V&V on a project requires developing a team and a system to capture project requirements, ensure they are understood, and distribute them to all staff. The process must be embedded in the design and construction processes. There is little value in establishing a separately located V&V team that produces paperwork which remains unused. Communication is critical.

The requirements list allows monitoring of progress against specifications and criteria. It provides site engineers with a tool to allow them to check the construction and provides a baseline against which changes are evaluated. Site engineers record the emerging evidence showing compliance with the requirements. This is done through site inspections, test reports, measurements, and photographs.

For more administration tips visit here

Models and Standards for Software Process Improvement

A software process improvement methodology is defined as a sequence of tasks, tools and techniques that can be used to improve the process of creating software. There are many models for software process improvement, but the better-known ones are presented in this entry along with their explanations and characteristics.

CMMI

If you want to start developing software for the US government, following the CMMI model is often an explicit contract requirement. In general, this model will be of great use if you want to run a software development company in the United States, because it is very well known there.

Capability Maturity Model Integration (or CMMI for short) is a model administered by the CMMI Institute. It is followed mainly by large companies that offer quality, enterprise-level products, which is why the government often requires this improvement model in its contracts for new software. CMMI’s main focus lies in improving risk management.

How it operates:

Capability Maturity Model Integration revolves around something called maturity levels, which are measured taking into account different aspects of whatever enterprise is being analyzed. The levels range from 1 to 5 and represent how well the company is managing its development process and how well the personnel are prepared to follow that process, as well as what must be improved to reach the best level: level 5.

For each maturity level, there is a specific yet generic goal to be achieved in order to level up, which we can see in the picture above.

For more information in the meaning of each level and improvement steps, you can visit this page.

TSP/PSP

Team Software Process (TSP) and Personal Software Process (PSP) are process models that help improve development methods overall, emphasizing process methodology and product quality, just like CMMI, but with a key difference. The two go together because they aim at the same type of goal and tend to measure similar things, like delivery times and productivity. The difference lies in whom they focus on: TSP is aimed at teams and groups of people (not as large-scale as CMMI), while PSP focuses on individuals, checking on discipline and skills.

ISO-15504

The international standard ISO-15504 is also known as SPICE (Software Process Improvement and Capability Determination). This standard evaluates the capability of the processes used to produce software, just like CMMI or TSP/PSP. An ISO-15504 certification may not be as useful as CMMI in the United States, but in other places, such as Europe, both carry more or less equal weight.

ISO-15504 revolves around three main areas to improve or judge, which are:

  • Process evaluation
  • Process improvement
  • Process capability or maturity

In order to assess these three objectives, the model classifies the state of the company or enterprise on a scale of maturity levels, as well as on a separate scale of capability levels.

Process capability evaluation

This evaluation ranks the overall capability of the processes the evaluated company uses to make products (how good the company’s methods are at making quality products). It is ranked on six levels, from 0 to 5:

  • Level 0: The process is incomplete
    • It is not correctly implemented and does not achieve its objectives.
  • Level 1: The process works
    • It is implemented and can reach its objectives.
  • Level 2: The process is managed
    • The process is controlled and its implementation is planned, monitored and adjusted. The results are established, registered, controlled and maintained.
  • Level 3: The process is established
    • The process is documented in order to guarantee the accomplishment of its objectives.
  • Level 4: The process is predictable
    • The process operates according to defined performance targets.
  • Level 5: The process is optimized
    • It continuously improves to help meet current and future goals.

Process maturity evaluation

This evaluation ranks how well organized and effective a company is at identifying, improving and innovating in order to continuously raise the quality of its products (how good the company itself is at improving and making quality products). It is ranked on five levels:

Level 1: Initial

The organization does not have formal procedures for the evaluation, development and evolution of its applications. When failure materializes, whatever fundamentals of a method exist are abandoned in favor of shortcuts in the realization and validation process. Organizational efforts then fall back on purely reactive practices, such as “code and test,” which amplify the drift.

Level 2: Reproducible

The management of new projects is based on the experience stored in similar projects. The permanent commitment of human resources guarantees the durability of knowledge within the limits of its presence within the organization.

Level 3: Defined

Project management guidelines and procedures are established to enable implementation. The standard software development and evolution process is documented and integrated into a consistent, comprehensive set of software engineering and project management processes. A training program has been implemented within the organization to ensure that users and IT professionals acquire the knowledge and skills necessary to take on the roles assigned to them.

Level 4: Managed

The organization establishes quantitative and qualitative objectives. Productivity and quality are evaluated. This control is based on the validation of the main milestones of the project as part of a planned program of measures.

Level 5: Optimized

Continuous process improvement is the main concern. The organization gives itself the means to identify and measure weaknesses in its processes, and it seeks the most effective software engineering practices, especially those whose synergy enables continuous quality improvement.

For more information about ISO-15504 you can visit here

MOPROSOFT

Inspired by ISO-15504, MOPROSOFT is actually a creation by the Mexican Software Engineering Quality Association (AMCIS)!

Unlike ISO-15504 and CMMI, MOPROSOFT is a model designed with enterprises in mind that may not be as big as well-known companies like Microsoft but would still like to achieve global levels of quality in their products. It takes into account the characteristics and environment of small and medium-sized companies, so the requirements and the evaluation level achieved say more about the actual state of the company.

MOPROSOFT divides itself into three levels, each covering different aspects of the company:

Dirección (Direction): This level focuses the company’s efforts on applying strategic planning and promoting optimal operation.

Gerencia (Management): This level focuses on improving the management of processes, projects and resources.

Operación (Operation): This level focuses on the specific processes of project administration as well as the development and maintenance of software.

 For more information about MOPROSOFT you can follow this link!

IDEAL method:

The IDEAL model, created by the Technology Adoption Architectures Team, is named after the five phases that make it up: Initiating, Diagnosing, Establishing, Acting and Learning. These phases are further divided into fourteen activities, which can be seen in the image below. It is perhaps the most flexible of the models mentioned above and, like the previous models, it serves as a roadmap for improving the company it is applied to.

The Initiating Phase Critical groundwork is completed during the initiating phase. The business reasons for undertaking the effort are clearly articulated. The effort’s contributions to business goals and objectives are identified, as are its relationships with the organization’s other work. The support of critical managers is secured, and resources are allocated on an order-of-magnitude basis. Finally, an infrastructure for managing implementation details is put in place.

The Diagnosing Phase The diagnosing phase builds upon the initiating phase to develop a more complete understanding of the improvement work. During the diagnosing phase two characterizations of the organization are developed: the current state of the organization and the desired future state. These organizational states are used to develop an approach for improving business practice.

The Establishing Phase The purpose of the establishing phase is to develop a detailed work plan. Priorities are set that reflect the recommendations made during the diagnosing phase as well as the organization’s broader operations and the constraints of its operating environment. An approach is then developed that honors and factors in the priorities. Finally, specific actions, milestones, deliverables, and responsibilities are incorporated into an action plan.

The Acting Phase The activities of the acting phase help an organization implement the work that has been conceptualized and planned in the previous three phases. These activities will typically consume more calendar time and more resources than all of the other phases combined.

The Learning Phase The learning phase completes the improvement cycle. One of the goals of the IDEAL Model is to continuously improve the ability to implement change. In the learning phase, the entire IDEAL experience is reviewed to determine what was accomplished, whether the effort accomplished the intended goals, and how the organization can implement change more effectively and/or efficiently in the future. Records must be kept throughout the IDEAL cycle with this phase in mind.

For more information about the IDEAL model you can follow this link

Software Quality

Software quality is an aspect of software development concerned with how well software is designed for its intended purpose and how well its requirements and functionality are fulfilled. This means that the better a program’s requirements are satisfied, the higher its quality. But how can we know that a program is of great quality? The definition seems pretty vague about how to achieve this state, because either the code fulfills its intended purpose or it doesn’t, right? Quality is very dependent on the project: what counts as great quality for one program does not necessarily mean the same for another, but we can still list some common traits.

Software Quality Factors

These factors are desirable common traits a piece of software must have for us to be able to say it has quality. Like software quality itself, the number of factors and the importance of each one are not set in stone. Various people at various times have added or removed some of them, so there are many quality factors you can take into consideration for your project. However, the factors are not an absolute way of determining the quality of software: some of them are subjective or hard to measure exactly, so we cannot say with precision how good a program’s state is, but we can at least get a close enough idea.

This is a list of the most notable factors according to the ISO 25000 family of standards, which today holds the current standard set of characteristics used to evaluate software. There are also McCall’s and Boehm’s quality models, which were used before the ISO 9126 quality characteristics (themselves later replaced by ISO 25000).

Functional Suitability: How well a system functions and completes the needed tasks when used under specified conditions.

Performance efficiency: How well the system performs relative to the amount of resources used under stated conditions.

Compatibility: How well a system can exchange information with other systems if necessary

Usability: Degree to which a product or system can be used by specified users to achieve specified goals with effectiveness, efficiency and satisfaction in a specified context of use.

Reliability: Degree to which a system, product or component performs specified functions under specified conditions for a specified period of time.

Security: Degree to which a product or system protects information and data so that persons or other products or systems have the degree of data access appropriate to their types and levels of authorization.

Maintainability: This characteristic represents the degree of effectiveness and efficiency with which a product or system can be modified to improve it, correct it or adapt it to changes in environment, and in requirements.

Portability: Degree of effectiveness and efficiency with which a system, product or component can be transferred from one hardware, software or other operational or usage environment to another.

For more information about these standards and ISO 25000 click here!

Software Quality control and assurance

This post has covered what software quality is and where we have to aim to reach a quality state. But how do we get there? How do we shoot for the stars? Do we have to guess? Well, luckily for us, there are two very useful sets of activities that we can follow to ensure quality in our software. These very convenient sets are called software quality control (SQC) and software quality assurance (SQA), both taking into account the factors seen previously.

SQA

It is a set of activities that focuses on establishing and evaluating the process of creating software. As it sounds, SQA focuses on preventing software defects from happening in the first place by organizing the workflow. SQA is present all the way from how the software requirements are stated and managed through to testing and release. Yes, software quality assurance is basically telling you to follow a good development life cycle.

How can we apply Software quality assurance?

Project management! Software design! Software development’s good practices! All these are your allies. Software quality assurance will be present when you try to follow a good development life cycle.

SQA focuses not on the product or software itself, but on the process of making it follow the requirements and meet specifications. ISO 9000 offers a very good quality management framework for achieving a good development process. You can check it here.

SQC

It is a set of activities that focuses on detecting and identifying faults and defects in the software. We are not perfect and tend to make mistakes; even if we follow the software quality assurance practices perfectly, defects are bound to sneak into our program anyway. SQC is oriented toward detecting these unwanted flaws so that the final product can shine as brightly as it can.

How can we apply Software quality control?

In order to apply SQC to the project, testing and reviewing are what you want to do.

Unit testing, integration testing, system testing, etc. help detect whether the program runs as intended and fully accomplishes its purpose. The better the tests, the more confident we can be in the quality of the software. Something important to note is that even if the program passes all the tests, a code review is encouraged so we can be extra sure everything is okay.

You can read more about reviews and tests here!

Ensuring Software Quality

Now that we have seen what we can do to reach the stars and have embarked on the adventure, how do we know when we have reached the goal? How can we be sure the software created has quality? As was said at the beginning, software quality varies a lot between programs (different requirements, preferences, work environments, etc.) and people have different views of what the standard should be when talking about quality. That is why we have the ISO standards, among others. Following SQC and SQA in order to fulfill a standard will give quality to a program. You know you’ve definitely reached the goal when you and your customer are satisfied with the result!