Software Testing

Let’s get this out of the way as soon as possible: different methodologies have different software testing protocols and frameworks. That doesn’t mean we can’t talk about software testing as a whole; it just means we have to be careful to cover what are arguably its most important aspects and make sure no one misses them.

First, let’s list those “most important” aspects of Software Testing. Again, different companies might seek different things from testing, but I consider these to be the basics:

  • What needs to be tested? You need to find and define which systems depend on your team, and test those. You might not be able to control every external part of the software, but you can make sure that your part works.
  • What is it supposed to do? Focus on your project’s requirements. Which characteristics and functionality are most important in your software? Once you answer that question, be sure to test them all.
  • Nice to haves: Just because something works, it doesn’t mean that it can’t be better. We all know the classic “if it ain’t broke, don’t fix it”, but remember, we’re not trying to fix what already works; we’re trying to improve on what’s already a good product. Non-functional requirements sometimes rival the functional requirements in importance.

Some people might think that the third and final point isn’t really necessary, but believe me when I say that sometimes the only differences between your product and the competition are the nice to haves, and that’s a war you don’t want to lose.

It’s important to know what’s going to be tested before actually testing your software. If there’s no strategy, not only will your software quality suffer, but your team will too. A bunch of people with no idea of what they’re supposed to be doing or how they’re supposed to be doing it is a perfect recipe for chaos.

Once a plan has been defined, there needs to be a test design: a set of test scenarios that are necessary to validate your system. Some might be as trivial as making sure that the date is being displayed correctly, and others might test complex communication between different modules in your system. It doesn’t matter how big or small each particular test might be; if it’s needed for a correct user experience, it needs to pass. These core tests are gonna help us discover important errors in our program, not superficial defects.
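
To make “test scenario” a bit more concrete, here’s a minimal sketch of the trivial kind (the date-display check) written as a pytest-style test. The format_date helper and the DD/MM/YYYY format are made up for illustration:

```python
from datetime import date

def format_date(d: date) -> str:
    # Imagined helper: the UI is assumed to show dates as DD/MM/YYYY.
    return d.strftime("%d/%m/%Y")

def test_date_is_displayed_correctly():
    # Trivial on its own, but if the UI shows dates, this still needs to pass.
    assert format_date(date(2021, 3, 7)) == "07/03/2021"
```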

Plans let us assess how good is good enough. Image taken from: https://blog.soton.ac.uk/servicemanagement/files/2015/06/Poor-to-Great1.jpg

Then, we execute our tests. It’s good common practice to test small, basic functionality first and finish off with the more complex tests. This way, if a basic part is malfunctioning, you’ll discover it before having to run a large test.

Last, but not least: results are gathered, analyzed, and evaluated to assess whether the code is good enough for production or needs some fine-tuning. Companies usually have a default percentage of tests that need to pass for a piece of code to be considered “good enough” for production, and while those percentages aren’t the same everywhere, they usually fall around the 90% mark. However, if any of the failed tests in that remaining 10% represents a critical malfunction in our code, it needs to be fixed before it ever sees the light of day.
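
If you want to make that gate explicit, a rough sketch could look like the following. The 90% threshold and the “critical test” flag are assumptions on my part, not a universal rule:

```python
def ready_for_production(results, threshold=0.90):
    """results: list of (test_name, passed, is_critical) tuples."""
    passed = sum(1 for _, ok, _ in results if ok)
    pass_rate = passed / len(results)
    critical_failures = [name for name, ok, critical in results if critical and not ok]
    # One critical failure blocks the release, no matter how good the percentage looks.
    return pass_rate >= threshold and not critical_failures

results = [
    ("login_works", True, True),
    ("date_format", True, False),
    ("tooltip_color", False, False),  # non-critical cosmetic failure
] + [(f"case_{i}", True, False) for i in range(7)]

print(ready_for_production(results))  # True: 9/10 passed and nothing critical failed
```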

That’s a general guideline of how testing should be done, but feel free to tweak it to your needs. Again, you might need different things, so you’ll probably end up doing things differently, but feel free to start off with the base described above.

Now, let’s talk about the actual tests. There’s a level system that makes it easy to categorize each test so that you know what it is that you’re actually testing.

  • Unit tests: these are your component-by-component tests. They test the core functionality of your program by making sure that all components work individually before even thinking about communication between them. This helps us know if our software does what it needs to do at a basic level.
  • Integration tests: once the core functionality is there, you need to test how the components work with each other. You test communication between components and find errors in the communication interfaces so that you can fix them ASAP (there’s a small sketch of these first two levels right after this list).
  • System tests: once communication between components has been tested, we test the system as a whole. You exercise the components together in a way that simulates real use and verify that every requirement is being met and is up to the established quality standards.
  • Acceptance tests: the final boss. You need to beat this to determine whether your software is ready to see the light or not. Requirements are ever-changing, so you need to constantly validate that your software meets them thoroughly. If it does, great, it goes straight to production. If it doesn’t, you’ll have to work a bit harder before you can release it.
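
To make the first two levels less abstract, here’s a minimal sketch built around made-up item_price and cart_total functions; system and acceptance tests would exercise the real application end to end instead of toy functions like these:

```python
def item_price(item):
    # Pretend component #1: a tiny price lookup.
    prices = {"apple": 10.0, "bread": 25.0}
    return prices[item]

def cart_total(items):
    # Pretend component #2: depends on component #1.
    return sum(item_price(i) for i in items)

def test_item_price():
    # Unit test: one component, in isolation.
    assert item_price("apple") == 10.0

def test_cart_total_uses_item_prices():
    # Integration test: two components talking to each other.
    assert cart_total(["apple", "bread"]) == 35.0
```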

That’s it. Four levels. If this were a platforming game, that might be a piece of cake, but since we’re talking about software quality, this is harder than it looks and sounds. At least it will make sure that your code doesn’t hurt anybody. That’s a plus, right?

Different people inside the testing process play different roles. There are test leaders, who deal with the administrative side of testing (planning the tests, defining objectives and goals, etc.), and then there are the actual testers, who gather the requirements provided by the leader and configure a proper testing environment to run the tests they developed based on those requirements.

Testing environments are often controlled machines that are guaranteed to work and always execute under the same conditions. All supported applications, modules, and components are available in this testing environment, as well as a network connection if needed. Additionally, these environments often have specialized execution logging tools that make it easier to report bugs, and they’re in charge of generating test data that’ll later be analyzed to improve the program.

Testing can become difficult if you don’t know what you expect to get out of the tests you’re running. That’s why test case design techniques exist. You can test both positive and negative input values that are always supposed to yield the same results. You could have a combination of positive and negative inputs, along with their different permutations, to assess how the program handles mixed inputs. Finally, if you have people who analyze the data gathered from previous testing rounds, your team can predict future failures and test certain components more thoroughly to make sure that nothing breaks unexpectedly.
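
Here’s a small sketch of the positive/negative input idea, using pytest’s parametrization and a made-up parse_age function:

```python
import pytest

def parse_age(value: str) -> int:
    # Made-up function under test: accept sensible ages, reject everything else.
    age = int(value)  # raises ValueError on non-numeric input
    if not 0 <= age <= 130:
        raise ValueError("age out of range")
    return age

@pytest.mark.parametrize("value,expected", [("0", 0), ("42", 42), ("130", 130)])
def test_valid_ages(value, expected):
    # Positive inputs: always supposed to yield the same results.
    assert parse_age(value) == expected

@pytest.mark.parametrize("value", ["-1", "131", "abc", ""])
def test_invalid_ages(value):
    # Negative inputs: always supposed to be rejected.
    with pytest.raises(ValueError):
        parse_age(value)
```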

If a defect is found, it needs to be categorized, assigned, fixed, and verified by QA. If a defect is fixed properly, the ticket is then closed and the results are reported to prevent it from happening again.
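
If you were to model that lifecycle in code, it might look something like this (the state names and allowed transitions are purely illustrative):

```python
from enum import Enum, auto

class DefectState(Enum):
    NEW = auto()
    CATEGORIZED = auto()
    ASSIGNED = auto()
    FIXED = auto()
    VERIFIED = auto()  # QA confirms the fix actually works
    CLOSED = auto()    # results reported so it doesn't happen again

# The only transitions we allow; skipping a step means skipping part of the process.
NEXT_STATE = {
    DefectState.NEW: DefectState.CATEGORIZED,
    DefectState.CATEGORIZED: DefectState.ASSIGNED,
    DefectState.ASSIGNED: DefectState.FIXED,
    DefectState.FIXED: DefectState.VERIFIED,
    DefectState.VERIFIED: DefectState.CLOSED,
}

def advance(state: DefectState) -> DefectState:
    return NEXT_STATE[state]

print(advance(DefectState.NEW))  # DefectState.CATEGORIZED
```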

Work smarter, not harder

Quick recap before we begin this:

  • Verification answers the question “are we building the product right?”
  • Validation answers the question “are we building the right product?”

End of recap.

Now that we know what V&V stands for, we can focus on the tools that aid the V&V process. You see, you can’t just go off and do whatever you feel like doing. Even though software development is often a somewhat chaotic process, there’s a bit of organization going on. We’ll talk a little bit about the tools that we use to turn this seemingly chaotic process into something a little more organized.

Version Control Systems

First, there are Version Control Systems. These help us make sure that we’re building the product right, and if we screw up, this tool allows us to go back in time and pretend it never happened. If you’re a programmer, you’re most likely familiar with version control already. If not, or if you’re new, let me explain it to ya. Version Control Systems record changes to a file or set of files over time to make it easier for you if you ever need to go back to a specific version later in the project’s lifetime.

  • Local Version Control System: This is what I like to call self-managed version control, because it’s basically just you creating a bunch of copies in different directories. If you’re somewhat organized, you might even include a timestamp to identify them faster. Problem is, they’re really, really error prone. If you ever modify a file in the wrong directory, you’ve just lost a version, and if you’re like me and just let VSCode auto-open the folder you’re working on, this is a real problem. Also, this only works if you’re a one-man team. If you’re part of a larger team, this is practically impossible to implement unless everyone works on the same computer.
Local Version Control System. Screenshot owned by me, taken for demonstrative purposes.
  • Centralized Version Control System: So, in order to deal with the whole issue of working in teams, CVCSs became a thing. These systems have a single “central” working copy of the project in a centralized server, and programmers grab a version, modify it, and commit the changes back to the central copy. If another programmer modified the files while you were working on them yourself, you can pull their changes and merge them before committing them back to the server. As you can probably imagine, this made it a lot easier and faster to deal with multiple people working on the same file at the same time.
Centralized Version Control System. Image taken from: https://miro.medium.com/max/700/1*GgaGcwh5L246YcU5NVDA5A.png
  • Distributed Version Control System: This is like an iterative upgrade over CVCSs. With DVCSs, you don’t only have a working copy of the project, but a copy of the whole repository (a “clone”) on your storage drive with all the metadata of the original. This means copying the project is heavier, but not so much as to call it wasteful (we’re mostly working with text files after all), and it brings a bunch of benefits with it. You can easily go back to a different version of a file locally instead of contacting the central server each time you need to roll back, and you can more confidently commit changes to your local copy without worrying about breaking the main copy. If changes were pushed to the central copy, you “pull” all the repository changes and history, and if you want to commit to the central copy, you “push” your changes to make them visible and accessible to everyone on your team.
Distributed Version Control System. Image taken from: https://miro.medium.com/max/700/1*CEyiDu_mQ5u9NI0Fr2pSdA.png

By now, you can probably see how useful Version Control Systems are. They help us keep track of all the changes and modifications we’ve made to each file inside a project, and they allow us to go back if we ever screw up (which we often do), so if you’re not already using a Version Control System, you should probably consider starting to do so. I personally prefer Git, but you could prefer something else.

Tools for testing

What are computers for, if not to make our lives easier? I mean, we’re programmers, so we’re technically in charge of making computers useful for everyone, but that doesn’t mean that we can’t spoil ourselves every once in a while by making tools for ourselves.

Testing: no one likes it (unless maybe you’re a video game tester, since you get to play the game before anyone else, but even then you have to deal with early bugs and glitches). The solution to this? Automation. Why are you still testing your software manually when a computer can make your life easier? You’re a programmer, you should know better. The less time you have to spend making sure that your code is correct, the more time you can spend stressing about your code not working the way you want it to. That’s why testing automation tools became a thing. Again, these tools help us make sure that we’re building the product right by making sure that everything’s working as it should.

There’s a bunch of test automation tools available. Now, I won’t go into too much detail on these because I’ve never actually used most of them myself (except for Postman; I depend heavily on Postman to test my endpoints when creating web-based applications. Postman is great, I highly recommend it). Even if I haven’t used them myself, I still recommend that you check them out and try to find one that suits your needs. Test automation is especially useful once a project gets bigger and you can’t manually keep track of everything that’s going on. You can find more information on these tools here, here, and here.
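
For what it’s worth, here’s roughly what one of those Postman clicks looks like as an automated check, written as a tiny Python test with the requests library. The URL and the response shape are made up; point it at your own API:

```python
import requests

def test_health_endpoint():
    # Hypothetical endpoint; swap in one of your own.
    response = requests.get("https://example.com/api/health", timeout=5)
    assert response.status_code == 200
    assert response.json().get("status") == "ok"  # assumed response field
```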

Process administration tools

Process administration is arguably the second most important part of software development (after version control, which is a lifesaver). Process administration is how you keep track of who is doing which task and how fast they’re working, so these tools help us make sure that we’re building the right product. They’re not only useful for managers who are looking to keep track of what everyone’s doing, but also for developers who might need help from someone who’s working on another piece of the project. These tools make it easier to know what everyone’s up to without having to explicitly ask them what they’re doing, and if you’re an introvert like me, the fewer social interactions the better.

Again, there’s a bunch of tools available, and we won’t talk about them all here. The main idea of this blog post is to let you know that they exist, and that they’re actually pretty useful.

Jira is arguably the most popular tool for this. It’s a software management tool used by a lot of big companies. It’s very, very robust, and it’s easy to implement in a workspace. Deep integration into the development process, compatibility with other management tools, and a bunch of features that aid the management process are some of the things that attract companies to Jira over its competitors.

Jira dashboard. Image taken from: https://www.atlassian.com/es/software/jira

Notion is another popular tool that serves more as an all-in-one workspace. In addition to management features, it can also help with design and productivity, and it supports Markdown, which is nice, since most developers are already used to Markdown.

Personally, when I just need a basic tool for managing tasks and tracking progress, I usually go for Trello. It doesn’t have a lot of features, but it gets the job done. If you don’t need anything fancy or deep integration with other tools, you can always opt for this, especially when you don’t have money to pay for new tools.

Remember, always try to find new ways to aid your work process. The less manual work you have to do, the more you can focus on actually being productive. Thinking and coming up with ideas takes time, and we can never take time for granted. Always remember that in order to be more productive, you have to learn how to work smarter, not harder.

Two heads are better than one, right?

There’s this common conception that two heads are better than one, which comes from the intuition that people working in groups are more likely to come up with the correct answer or decision faster, or more often, than if they were working alone. You see, when you get a bunch of people together and ask them to come up with an idea, it’s highly unlikely that they all have the same idea, right? So there’s an inherent process of brainstorming when people work in organized groups, because everyone has their own thought process and their own ideas. Starting from there, each individual will either defend their idea or adopt another person’s idea if it sounds like a better choice. More often than not, the surviving idea or decision will be correct, because the arguments favoring that decision are often the most sound.

Two heads are better than one. Image taken from: https://www.creativeintellectuk.com/2015/wp-content/uploads/TwoheadsPartnership.jpg

Okay, sure, that sounds nice and everything, but you already know we’re here to talk about software quality, so what does that have to do with software? Well, it just so happens that software development is a process filled with decisions (button placements, text field vs dropdown menu, algorithm X vs algorithm Y, while vs do-while, etc.), so maybe this conception also applies to software development… and it turns out that it kinda does?

Code review (which we often call peer review) is a software quality assurance activity where someone checks another developer’s code while it is being written. The idea is that there’s a coder and a reviewer, and the reviewer will ask questions about the code and provide feedback on it after a piece of it is written, or sometimes while it is being written, if the reviewer thinks it is necessary (e.g. when the coder is going in a completely wrong direction). This process often results not only in the discovery of quality problems, but also in better overall code quality (in most aspects), more defects found early, more people knowing how a particular piece of code works (if it needs maintenance and the author isn’t there, someone else can take care of it), an increased sense of collective code ownership, better solutions to problems, and better compliance with QA guidelines.

Sounds cool, right? Well, if you like the feeling of someone constantly looking over your shoulder, that is. Developers often prefer sticking to a more toned-down version of peer review, where we ask a teammate to help us review our code only when we encounter a problem, and not through the whole process. Now, since the other person was not involved in the whole development process for this piece of code that you just wrote, there’s often a brief rubber ducking process where you explain your code to your peer, and this also helps the developer clear their mind and look at the bigger picture, which can help them come up with better/faster solutions.

Of course, this process of constant feedback doesn’t only help through the coding process. Two heads are better than one for pretty much anything: planning, requirements, design, etc. You name it, two heads can do it better (probably). What helps us the most is the constant feedback that we get from working closely together. Whether it’s planning, designing, or coding, you’ll most likely benefit from having different points of view. Having a larger perspective often results in taking more things into consideration, which means that you’ll probably encounter fewer problems along the way.

In conclusion: yes, two heads are better than one.

Verification and validation are not the same thing

Wait, what? They’re not the same?! You read that right. Verification and validation are not the same. They might be pretty similar, but they’re definitely not the same thing, and I’ll show you a quick little trick so that you can identify which is which.

Always ask yourself these two questions when developing a project:

  • Are we building the product right?
  • Are we building the right product?

If the answer to the first question is yes, then congratulations, your product is verified. If the answer to the second question is also yes, then your product is also validated. Let’s quickly review what each means.

Verification is the process of evaluating our code to make sure that the results of any given development phase satisfy the conditions that were determined at the beginning of said phase. This helps us make sure that we are always on the right track by making sure that the program is always working as intended.

Verification and validation. Taken from https://gqsystems.eu/files2/gallery/2016/12/validation.png

Validation, on the other hand, is the process of evaluating our code to make sure that it satisfies the business requirements of the project. While verification makes sure that the program is working as intended at any given phase, validation makes sure that the program is doing what the client wants it to do. This can be done internally, or externally. If done internally, the team compares the functionality of the program with the provided requirements. If done externally, the client, partners, and stakeholders decide whether or not the program is doing what it’s intended to do.
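
Here’s a loose sketch of the difference, using a made-up discount rule: the first two asserts are verification (the code matches the phase’s written spec), and the last one is validation (the behavior matches what the client actually asked for):

```python
def discount(total):
    """Assumed spec for this phase: 10% off orders over $100."""
    return total * 0.9 if total > 100 else total

# Verification-style checks: the implementation matches the written spec.
assert discount(200) == 180
assert discount(50) == 50

# Validation-style check: the client's requirement, phrased as an acceptance criterion:
# "a customer spending $150 should pay no more than $140."
assert discount(150) <= 140
```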

Now, knowing the difference between verification and validation is not enough. You’ve got to apply this knowledge to your projects. Software quality assurance is no joke. Software is a fragile thing, and if we don’t take good care of it, it will break. Constant verification and validation is crucial to make sure that the quality of our projects is up to a certain standard.

Speaking of standards, there’s a bunch of them for V&V. There’s ISO/TS 17033, ISO 14021, IEC 17029, IEEE, ISVV, and a bunch more, but we won’t go into details about those. What we will describe in detail is the process of planning and administrating V&V.

In order to properly verify and validate your software, you have to follow a plan. Once again, I know I’ve said this countless times, but there’s no one-size-fits-all approach to anything software related, so you have to make sure that your plan fits the needs of your particular project.

  • First, you need to describe what your team will do and when they’ll do it. You also need to make sure that everyone understands what is going on by thoroughly explaining the concepts in your V&V plan.
  • Then, you need to decide which components are going to be tested. Once the components are decided, you now have to explain how you’re going to both verify and validate them. Keep in mind that there are often a lot of small components in a project, so you need to make sure that they all work individually.
  • Next, you’ll create functional tests. This step is similar to the previous one, but this time you’ll be testing bigger pieces of code to make sure that those small components that you tested individually now work when put together. Components in code are like members of a team: if one of them doesn’t work, the whole team fails and needs to assess what is going on.
  • Now we have the acceptance rate. Does the code work? How well does it work? If the answer is not 100%, is it good enough to make it into the main branch and be fixed later, or is it a deal breaker? It’s not only a matter of passing or not; you need to evaluate how well the tests passed.
  • Finally, you’ll process and analyze the results yielded by the previous step and evaluate how good your team is doing. There’s always room to improve, so you need to be constantly checking these results to see what changes could be made so that the team performs better.

Of course, having a static plan is not good enough. The industry changes, software changes, best practices change, and so should our plans. Some basic administration skills are needed for the final stage of verification and validation, but it’s fairly easy once you get the hang of it.

  • Make sure that test results are documented in a concise way. The harder these results are to read, the harder it’ll be for your team to find where it’s lacking.
  • Components should be tested in an optimal order. If one component uses another, be sure to check the other one first. This way, it’ll be faster and easier to find errors, thus making them easier to fix.
  • Documentation is important, guys. Make sure to explain what each test is testing and why it’s testing that particular thing. Remember, data that can’t be read or interpreted is just a bunch of garbage.
  • Always take the bigger picture into account. Different people will use your software differently, so make sure to take that into consideration when testing your programs.
  • Again, documentation is key. This time, you won’t be documenting the tests’ purposes, but the results yielded by said tests. Make sure that they’re nice and tidy so that anyone who comes across these results can interpret them and have a general idea of what’s going on in your team (there’s a small sketch of one way to do this right after this list).
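
As a sketch of what “nice and tidy” could look like, here’s one possible per-test record; the field names and the requirement ID are just suggestions:

```python
import json
from datetime import date

result = {
    "test": "test_cart_total_uses_item_prices",
    "purpose": "Verify that the cart total is computed from individual item prices",
    "component": "cart",
    "passed": True,
    "run_date": date.today().isoformat(),
    "notes": "Covers requirement R-12 (order totals)",  # made-up requirement ID
}
print(json.dumps(result, indent=2))  # easy to read, easy to archive, easy to aggregate
```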

Software quality is often undervalued and underestimated, but truth is, the world would probably fall into chaos if we didn’t thoroughly test our code before releases. Next time you see your buddies at QA, hug them and tell them how much you appreciate them. After all, if it weren’t for them, our code would probably be a worthless piece of de-optimized trash.

Models and Standards for Software Process Improvement

Hello there (General Kenobi). Welcome back to my blog posts. Grab a drink, grab your snacks, ’cause today we’ll be talking about models and standards for software process improvement (woo-hoo?). I’ll try to keep it short, but I can’t promise you anything. In fact, it’s probably gonna be a rather long one.

General Grievous saying General Kenobi. Taken from https://gfycat.com/freshgleamingfulmar-quotetheguy-definition-highdef-movie

So uh, what are these models and standards all about? Well, they’re supposed to improve the way we develop software by planning and implementing new activities that are designed to achieve specific goals set by the company you work for. Some of these goals include, but are not limited to:

  • Increasing product quality
  • Decreasing development/maintenance costs
  • Increasing development speed

Now, as you may very well know, there’s no such thing as a “one-size-fits-all” approach to anything software related, so of course, there’s a bunch of people and organizations that have different opinions and different ideas on how we can improve the way we create software. In this blog post, we’ll be talking about a couple of them and see how they stack up against each other.

CMMI

The Capability Maturity Model Integration is a set of “best practices” that measures and improves a team’s performance through a process level improvement training and appraisal program. This model will help you build, improve, and measure the capabilities of your team, increasing your overall performance in the process.

CMMI categorizes organizations based on what they call “maturity levels”. The higher the maturity level, the better. Each level builds on top of the previous one by adding a new set of practices.

  • Level 1: The organization is unpredictable and reactive. It gets the job done, but going over budget and delays are relatively common. Basically your average college student.
  • Level 2: Projects are planned before work starts, performance is measured and controlled, and deliveries go through some sort of quality assurance.
  • Level 3: Projects and programs are guided by an organization-wide standard. Decisions are analyzed before being executed, and risk management plays a bigger role now. The team now prevents things from going wrong instead of reacting when something goes downhill.
  • Level 4: The organization gathers enough quantitative performance data to become data-driven. This results in a measured and controlled environment where objectives are predictable and the expectations of internal and external stakeholders are constantly achieved.
  • Level 5: The organization performs at a stable rate, and is basically unaffected by change thanks to its flexibility, which helps it pivot rapidly and respond to any opportunity that may present itself.

However, this is not something that’s tailor-made for software developers, so it may not be able to tackle some of the things that hold your team back, but it is a great option for companies that have multidisciplinary teams that need to improve.

TSP/PSP

Personal Software Process is tailor-made for software developers, providing a structured process designed to help us better understand and improve our performance. Estimation skills improve as a result of learning to make promises that are actually achievable, based on real data that helps justify decisions; quality standards are better managed; and development time, defects, and size are measured and analyzed, reducing the faults and imperfections in our code.

Relationship between PSP and TSP. Taken from http://bluit.mx/img/madurezen.png

There’s a structure to integrate PSP into your process:

  • PSP0, PSP0.1: Planning, developing, and post mortem are the three phases of PSP0. The engineer gathers data during these phases (time spent programming, faults injected and removed, program size), and ensures that everything was properly measured in the post mortem phase. PSP0.1 improves over PSP0 by adding a coding standard and an improvement plan.
  • PSP1, PSP1.1: Based on the data collected in PSP0 and PSP0.1, PSP1 now prompts you to estimate how large a new program will be and to prepare a test report. Previous data is used to better estimate the time it should take to develop this new project. Each new project will follow PSP0 and record data accordingly so that we can get progressively better at estimating and planning schedules in PSP1.1.
  • PSP2, PSP2.1: PSP2 adds design and code review to the equation. Defect prevention and their removal are the main focus for this stage, making engineers learn to evaluate and improve their process by measuring how long it takes them to execute certain tasks and how many defects they inject and remove in each of the development phases. PSP2.1 introduces design specification and new analysis techniques to improve what was applied in previous stages.

Team Software Process basically means that every programmer in a team is PSP-trained so that the project is managed by the team itself. Each member gathers personal performance data and the whole team plans, estimates, and controls the quality of the software produced based on said data.

As you might’ve already noticed, data is super important for PSP/TSP to be properly applied. Historical data is continuously analyzed and used to improve. PSP has 4 core measures (there’s a small logging sketch after the list):

  • Size, which is commonly measured in lines of code.
  • Effort, which is commonly measured in minutes.
  • Quality, which is commonly measured by the number of defects.
  • Schedule, which is commonly measured by comparing planned vs actual completion dates.
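
Here’s a minimal sketch of what recording those four measures might look like for a single task; the schema is entirely up to you:

```python
from dataclasses import dataclass

@dataclass
class TaskRecord:
    task: str
    size_loc: int         # Size: lines of code
    effort_minutes: int   # Effort: time spent
    defects: int          # Quality: defects found and removed
    planned_finish: str   # Schedule: planned vs actual completion
    actual_finish: str

log = [
    TaskRecord("login form", size_loc=180, effort_minutes=240, defects=3,
               planned_finish="2021-05-10", actual_finish="2021-05-12"),
]

# Historical data like this is what PSP1's estimates are built on.
defects_per_100_loc = sum(r.defects for r in log) / sum(r.size_loc for r in log) * 100
print(round(defects_per_100_loc, 1))  # 1.7
```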

PSP/TSP basically takes CMMI’s principles and applies them to software development processes, providing specific solutions to specific problems that we most likely will encounter when working on a project.

ISO-15504

You know our friends over at ISO, they’ve got a bunch of standards for quality and performance in a lot of industries. Turns out software is one of those industries. ISO-15504 is also known as SPICE (Software Process Improvement and Capability Determination). Although it’s been superseded by ISO 33001, it’s nice knowing a bit of history on how these standards came to be.

The process dimension in the standard’s reference model defines processes divided into five categories: customer-supplier, engineering, supporting, management, and organization. New parts are published constantly, and the process categories are expected to expand.

There are six capability levels:

  • Level 0: incomplete process, meaning that performance and results are incomplete and chaotic.
  • Level 1: performed process, meaning that processes are intuitively performed, and the input and output work products are available.
  • Level 2: managed process, meaning that processes and work products are managed, and the team knows its responsibilities.
  • Level 3: established process, a defined process that suits the team’s specific needs is tailored from a standard process and deployed.
  • Level 4: predictable process, performance data is available for historic analysis, making future results predictable and controllable.
  • Level 5: optimizing process, continuous improvement is being constantly achieved through process innovation and optimization based on previous and current performance data.

Each level has different process attributes, which are evaluated on an NPLF scale (Not achieved, Partially achieved, Largely achieved, Fully achieved). Based on the results attained for each attribute, a level is assigned and the team is expected to improve from there.

MOPROSOFT

Modelo de Procesos para la Industria del Software (Process Model for the Software Industry) has the goal of evaluating and improving software systems development and maintenance processes. It was developed by the Mexican Software Development Quality Association through UNAM’s accounting and administration faculty to provide a Mexican standard appropriate for the majority of Mexican software development companies.

It is based on the ISO 9001:2000 process model, the level 2 and 3 process areas of CMM-SW, the general concepts of ISO 15504, and the best practices provided in the PMBOK and SWEBOK.

It defines three main processes that are supervised and controlled:

  • High direction (top management): provides strategic planning and promotes the organization’s optimal operation through revision and constant improvement of the model.
  • Management: provides resources, processes and projects, and it’s responsible for making sure that the organization’s strategic goals are accomplished.
  • Operation: the actual workforce; it develops the projects set by management and maintains previously developed software.

IDEAL

IDEAL stands for Initiating, Diagnosing, Establishing, Acting, Learning. This model serves as a roadmap for initiating, planning, and implementing improvement actions. There are multiple steps to each phase in the IDEAL method:

  • Initiating: Setting a context that takes into account key business drivers, building sponsorship, and establishing who will be taking action.
  • Diagnosing: Characterizing the current state, as well as the desired state, and developing improvement recommendations for each team.
  • Establishing: Setting priorities based on business drivers and area impact, developing an approach, and planning for an improvement program based on the recommendations.
  • Acting: Creating a solution, testing said solution and observing the results, refining the solution based on said results, and installing a final solution after the refined solution fits the needs of the organization.
  • Learning: Analyzing and validating that the results attained line up with the desired state established in the diagnosing stage, and proposing future actions to further improve in the future.

This is perhaps both the most detailed and the most flexible out of the ones listed in this blog post. It provides a very specific and tailored framework that we follow to improve our processes, but nothing is specific enough to the point of being inflexible. On the contrary, IDEAL is recognized for its flexibility, making it easy to pivot if needed.

Of course, depending on your team’s specific needs, you’ll most likely lean towards one of these models more than the others, but the idea is that you should have a clear picture of all the alternatives available to you so that you can make the decision that’ll bring the most benefits to your team.

Is there such a thing as objectively good code?

Software never was perfect, and it never will be. Get used to it. “But why?” you may ask. Well, if we knew why, we’d probably be working on making it perfect, but that’s the thing: we don’t know what the hell we’re talking about when we refer to “Software Quality”. That’s right, today we’ll be talking about software quality and why we, software developers, aren’t its biggest fans (spoiler alert: because we can never seem to achieve it).

Let’s start with probably the most important question: what is Software Quality? Well, it’s an ambiguous term that we like to throw around to refer to writing “good code”. Okay then, when can we consider our code to be “good”? That’s a great question, and I’ll be honest here: I don’t know, but that’s not gonna stop me from trying to explain it to you.

I don’t know and you don’t know, so let me try to bring some sense into this mess, shall we?

I don’t think anybody knows for sure what good code is. If you go around and ask a hundred software developers to define what good code is, you’ll probably get a hundred different answers. Sure, they might overlap and mention the same key aspects or concepts sometimes, but that doesn’t mean that they’re referring to the same thing.

There is no such thing as “objectively good” code.

I used bold and italics to emphasize it and make sure that you remember it, but I’ll repeat it: There is no such thing as “objectively good” code. You can quote me on that. Don’t believe me? Open a new tab and search “Software quality” on Google (or whichever search engine you prefer, I just happen to use Google). Click on Wikipedia’s page and scroll down a bit to the “Contents” section. Under the “Definition” section, you’ll notice that there are 5 entries: 5 different definitions of what software quality is, and none of them are the same (if they were, there wouldn’t be 5 different entries, duh).

“But Jorge,” I hear you say, “if there’s no such thing as objectively good code, then there’s no such thing as good code period, right?” And to answer this, I’ll use a meme because all this serious talk is getting boring:

Ah yes, everyone’s favorite answer: “it depends.”

I’m not sure if you’re familiar with thermodynamics. I’m not, but Muse happens to be my favorite band, and they titled a whole album after what is maybe the most important law of thermodynamics: The 2nd Law (not their best work in any way, but it’s still pretty good if you want to check it out). It’s a pretty long and complex law that explains why infinite growth is impossible and how energy becomes useless once it’s been utilized, and it uses a bunch of terms that you might not be familiar with, such as entropy, isolated systems, and thermodynamic equilibrium, so I’ll summarize the second law of thermodynamics for you: the amount of unusable energy (entropy) of a system that doesn’t interact with other systems (an isolated system) can never decrease over time, and in practice it tends to increase.

This is probably you after reading a whole paragraph about thermodynamics in a blog post about software quality and good code.

“How does this relate to good code and software quality?” Relax, I’m getting there. Think of software as the entropy of an isolated system, and let’s replace “isolated system” with “computer system”. Now we have something along the lines of “Software in a computer system can never decrease over time, and in fact, is always increasing.” Now, it’s not really that software is incapable of decreasing, it’s just that we don’t usually go around taking away features and pieces from a computer system (what would be the point of that?). We usually want to add features, not remove them, so let’s make one final adjustment to it, shall we?

“Software in a computer system doesn’t usually decrease over time; on the contrary, it tends to increase”

That’s more like it. Now that we’ve wrapped our minds around that idea, I’ll throw one final piece of thermodynamics knowledge at you before we go back to software quality (I promise this one’s short). Entropy is usually associated with a lack of order and predictability, and systems tend to gradually decline into what we commonly know as disorder. Sound familiar? It should, because that’s exactly what code does: the more code a project accumulates, the less control we have over it, and the more we struggle to make sense of what it does, when it does it, and how it does it.

The evolution of the Beijing and Shanghai subway system. As you can see, the more we add to it, the “messier” it becomes (although I’m pretty sure it’s actually very organized, it just looks messy)

This blog post has been pretty pessimistic, hasn’t it? I mean, we’ve only focused on things that we can’t control, and how bad things get when we lose control over them, but I already knew that would happen. After all, I’m writing it, and if I wanted it to be any different, I would’ve changed it before I posted it. You see, I needed you to comprehend something before I went on to explain how you can make better code: there are things that are out of your control and always will be, and you need to accept that before you can focus on what you can do instead of thinking of what can’t be changed. You need to let go of what you can’t control to grasp onto what you can.

What can we control then? Well, there’s a couple things that come to my mind right away:

  • You can control whether or not your code works. That should be easy enough: does it do what you want it to when you want it to? Then congrats, it works.
  • You can control if it meets the goals of your project. If your code works but it doesn’t really add any value to your project, then why did you even write it in the first place?
  • You can control its readability. If no one can figure out what the hell your code is doing, how are they supposed to maintain it?
  • You can control its robustness. In case your code were to fail, does it handle it gracefully, or does everything crash and burn? If it’s the latter, something’s wrong.
  • You can control its elasticity. How well does your code handle changes both inside and outside of it? Please don’t go around hard-coding things, I beg you (there’s a small sketch of these last two points right after this list).
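
Here’s a small sketch of those last two points: fail gracefully, and read what might change from configuration instead of hard-coding it. The file name and defaults are made up:

```python
import json
import logging

DEFAULTS = {"items_per_page": 20}

def load_settings(path="settings.json"):
    try:
        with open(path) as fh:
            return {**DEFAULTS, **json.load(fh)}
    except (FileNotFoundError, json.JSONDecodeError) as err:
        # Robustness: log the problem and fall back to sane defaults instead of crashing.
        logging.warning("Could not read %s (%s); using defaults", path, err)
        return dict(DEFAULTS)

settings = load_settings()
print(settings["items_per_page"])  # 20 if the file is missing or malformed
```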

There’s a bunch of other things that you can control about your code, but that’s not the focus of this blog post (although, with how much we’ve covered, I’m no longer sure what the focus is anymore). We want to focus on software quality and how to achieve it. “But didn’t you say that software quality is an ambiguous term?” I hear you say. Why yes, I did, thanks for remembering that. As I mentioned, software quality is subjective, and while that’s a source of confusion for most, we should consider it a blessing. That means that you have some room to bend the rules, so you can create your own definition and your own software quality standards.

Bending the rules, get it? I know I’m not funny, I’m trying my best, okay?

Now, I won’t get into detail about how you decide what you want and what you don’t want your code to be (there are entire organizations that focus on this) because that would take me way too long, and it probably belongs in its own blog post, so I’ll skip over to the next step. Once a definition and a standard have been decided, how do you make sure that they’re being applied to your code? Well, first, you want to make sure that you adhere to the definition and standards that have been established as closely as possible. Now, while following these standards might ensure that your code is at least decent, it doesn’t actually guarantee that you meet every criterion. For that, you’re going to need some help from these two guys: Software Quality Assurance (SQA) and Software Quality Control (SQC). They sound very similar, and they are, but there’s one key difference:

  • Software Quality Assurance makes sure that the quality standards are being met during the software engineering processes that produce products (software). It focuses on the process and ensures that everyone is following proper protocol to create a product that’s up to the established quality standards.
  • Software Quality Control makes sure that the products produced meet the quality standards that have been previously established. It focuses on evaluating the software produced and ensuring that everything that comes out of the software engineering processes is good to go.

In short, SQA focuses on the production, and SQC focuses on the product. Simple, right? These two processes are mostly responsible for the quality of the code and software that’s being developed, and while we, developers, like to laugh it off and joke about hiding things from Quality Control, truth is that without them, most of our code would likely be trash (I know you didn’t want to hear that, but someone had to say it).

This blog post is already longer than it needed to be (sorry about that), so we’ll finish it off here. We’ll be talking more about other aspects of software quality in future posts, so you might want to keep an eye on that. As an apology for making this post so long, here’s a picture of a cute cat:

Funny cats are always cute, but cute cats are not always funny, so I’m killing two birds with one stone here.