During this semester, I had two classes that taught me new things. I don’t mean theoretical things; supposedly I learn those every day. I mean experience. Those classes were TC1018 (data structures) and TC1019 (intro to software engineering). In the first one, I had my first real challenge. To be honest, I’ve never had such a hard time with a course, not even OOP or literature (believe me, at my school it was really hard). But this post is not about TC1018 and my crazy teacher. This post is about TC1019, a really different experience.
For as long as I can remember, school has been about teachers speaking in the classroom (sometimes flipping through slides and spitting out random words), assigning homework and handing out a test about what they said. The same loop for 13 years (preschool doesn’t count). This semester was different. First: a class where your attendance is not important, where the teacher does not spit out words but rather makes you investigate and learn on your own (boosting your self-learning skills), and where you grade yourself. One can think of it in two ways: a) this is the best teacher, since he achieved Teachvana and is not establishing any knowledge frontiers, or b) this is the easiest course; I’ll probably not do anything and still achieve the perfect score. When I started this course, I was thinking like option B (I still do, a little). But as I learned new things, I started understanding what life is about. You don’t stop learning after school. If you do, you become obsolete in a short time. Returning to the non-obligatory attendance and pseudo no-deadline assignments: I believe this teaches you to be responsible. I won’t deny it: I left most of the posts for the end of the month. But still, I got them done.
Imagine that you finished a big, big, BIG project. You’re delivering it. The client signs the final check and shakes your hand, thankful. Two weeks later, you receive a call. It’s your lawyer. Your client wants to sue you because what you did is not what he asked for (at all). So… you have three choices: a) you rot in jail, b) you deliver a new product (which means you work extremely hard to make it fast and good), or c) you travel back in time and check that what you’re developing is actually what the client needs. I have great news! Although option C is physically impossible, you can do something similar: check it before (and during) every project you build.
Verification and validation may seem like the same concept. To tell the difference, ask yourself: am I building the product right? That is verification. On the other hand: am I building the right product? That is validation. Both are equally important.
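The distinction can be made concrete with a small sketch. The discount function and its “spec” below are hypothetical examples, not from any real project:

```python
# Verification vs. validation, with a hypothetical discount rule.
# Spec as written: "10% off orders strictly over 100".

def discounted_total(total):
    # Implements the spec literally: discount only when total > 100.
    return total * 0.9 if total > 100 else total

# Verification: does the code match the written spec? Tests can answer this.
assert discounted_total(200) == 180.0
assert discounted_total(100) == 100   # exactly 100 gets no discount

# Validation: did the client mean *over* 100, or *from* 100 (inclusive)?
# No test can answer that question; only talking to the client can.
```

All the verification tests pass here, yet the product could still be wrong: that is exactly the gap validation covers.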
There are three other concepts that are important. One might think they’re all the same, but they’re not. Here are the concepts about errors:
Fault: your code is wrong; you forgot a parenthesis, or you named the file with the wrong extension. Simple bugs, generally caught by the IDE or the compiler itself.
Failure: a runtime error. It happens during the execution of the program. These must be debugged manually.
Malfunction: a deeper error. It involves the structure and architecture of the program. Usually, these are the worst bugs. They can take days to solve. Sometimes one must restructure from scratch.
This process is directly related to software testing and requirements elicitation.
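A tiny sketch of the fault/failure distinction, using a hypothetical `average()` helper. The code has no syntax problems, yet it contains a fault that only surfaces as a failure at runtime, for one particular input:

```python
def average(numbers):
    # Fault: dividing by len(numbers) without guarding against an empty list.
    return sum(numbers) / len(numbers)

print(average([2, 4, 6]))   # works fine: 4.0

try:
    average([])             # failure: the latent fault surfaces at runtime
except ZeroDivisionError:
    print("failure caught: empty list triggers division by zero")
```

This is why failures need manual debugging: the compiler or interpreter accepts the code happily, and only execution with the right (wrong) input reveals the problem.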
I would classify software into three categories. All of them are equally important, and even though each works as one piece, a real big project requires all three. The first is the core software: in a word, algorithms. By itself, it is one of the most important things. But without the other two components, it’s good stuff only for other developers. Second: the database, which is how the information is stored. Again, if not used as a whole with the other components, it’s only good for developers. And third: the UI. The UI is, at least for users, the most important aspect of every program. If you have an ugly UI, your program is “not as good as X program, which is pretty”. If you have a great UI, even if your core is not that good, (most) users don’t care: the UI is great and simple, and they can use it without problems.
The user interface is the layer of software with which users interact. It seems like the simplest layer (technically speaking), but it is one of the most complex to develop. Why? They say a UI is like a joke: if you need to explain it, it’s probably not good.
UI comes in three forms, from most technical to most natural:
CLI: Command Line Interface. Basically, a terminal in which one types text-only commands.
GUI: Graphical User Interface. This time the interface is visual, but it’s still fully digital.
NUI: Natural User Interface. The most natural form of UI. It includes sensors, buttons and other devices that can be physically manipulated by the user.
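As a concrete example of the first category, here is a minimal CLI sketch using Python’s standard `argparse` module. The “greet” behavior and flag names are made up for illustration:

```python
# A tiny command-line interface: text in, text out, no graphics at all.
import argparse

def build_parser():
    parser = argparse.ArgumentParser(description="Tiny demo CLI")
    parser.add_argument("name", help="who to greet")
    parser.add_argument("--shout", action="store_true", help="greet loudly")
    return parser

def main(argv=None):
    # argv=None makes argparse read sys.argv; a list makes it testable.
    args = build_parser().parse_args(argv)
    greeting = f"Hello, {args.name}!"
    return greeting.upper() if args.shout else greeting

if __name__ == "__main__":
    print(main())
```

Run as `python greet.py Ana --shout` it would print `HELLO, ANA!` — everything happens through typed text, which is exactly what defines a CLI.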
Software isn’t perfect. If it were, then probably just one or two projects would be enough for all the software in the world. Therefore, software tends to fail over time. Maybe some system registry entries were modified. Maybe there’s trouble with libraries. Maybe the software got so big it doesn’t work anymore. There are many reasons.
Now: many people believe that maintaining software means fixing the problems it has. But this is not always the case. Sometimes maintenance is non-corrective; that is, it’s focused on preventing errors rather than fixing them.
There are four types of maintenance:
Corrective: fix problems that stop the program from working.
Adaptive: modify the program in order to make it runnable in another environment.
Perfective: modify the program in order to increase performance or maintainability.
Preventive: modify the program in order to prevent possible problems.
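To make one of these concrete, here is a sketch of perfective maintenance: the behavior stays the same, but performance improves. The ban-checking helpers are hypothetical:

```python
# Before: O(n) list scan on every membership check.
def is_banned_slow(user, banned_list):
    return user in banned_list

# After (perfective maintenance): build a set once, then O(1) lookups.
def make_ban_checker(banned_list):
    banned = set(banned_list)
    def is_banned(user):
        return user in banned
    return is_banned

checker = make_ban_checker(["eve", "mallory"])
print(checker("eve"))   # True
print(checker("bob"))   # False
```

Nothing the user sees changes; the modification exists purely to make the program perform (and read) better, which is what distinguishes perfective work from corrective fixes.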
This is my last article of the partial. And, to be honest, I’m quite tired from writing circa 2 posts daily (I guess that’s my fault for not making them in time). So, this article might feel poor compared to others. I apologize for that.
Software implementation refers to, as you may have guessed, implementing the software: making it actually runnable on different computers.
Some of the tools that one needs to implement software are programming environments, compilers or interpreters, and version control systems.
After reading The Cathedral and the Bazaar, an essay by Eric S. Raymond, I got some thoughts about OSS. I have to admit: I fell in love with Rocket League to the point of neglecting this blog (this wasn’t typed by me, but it is indeed a true reflection, typed by a good friend). Anyway. Even though I already admired OSS, after reading the article my perspective changed. OSS was not only the gift of geniuses to us; it was also a development method.
Fred Brooks stated one of his hypotheses in The Mythical Man-Month: as the number of programmers grows linearly, the communication complexity of the project grows quadratically, so adding people slows development down. But in this essay, Raymond argues that this is not necessarily true. In fact, when the number of developers grows enough (as in open source), debugging time diminishes drastically.
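Brooks’s point can be made concrete by counting communication channels: with n programmers, every pair may need to coordinate, giving n(n−1)/2 channels. People grow linearly; coordination grows quadratically.

```python
# Pairwise communication channels among n programmers: n * (n - 1) / 2.
def channels(n):
    return n * (n - 1) // 2

for n in (2, 4, 8, 16):
    print(n, channels(n))   # 2 -> 1, 4 -> 6, 8 -> 28, 16 -> 120
```

Doubling the team from 8 to 16 more than quadruples the channels (28 to 120), which is the intuition behind Brooks’s warning — and the cost structure Raymond claims the bazaar model sidesteps, because contributors coordinate loosely rather than all-to-all.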
Another big characteristic of OSS is the way people work with you. If you reward the people who work on the project, if you tell them they are important and make them believe it, they will work incredibly well, even if they do not receive real payment.
There are many topics covered in this essay, but these two are the ones I consider most important. I highly recommend reading it.
Wouldn’t it be beautiful if all the best things in the world were free? Free as in freedom. Well, even though that’s not true, I still have good news: some things ARE free. And one of them is Open Source Software (OSS).
OSS is probably one of the best things that has happened to the software universe. Why? First, because it’s free. Second, because it’s good. And third, because it evolves and gets better. But what exactly is it? OSS consists of software which is, essentially, open for anyone to use, distribute and modify. This means EVERYONE can use it. Additionally, OSS is usually developed publicly, by many people (this is explained better in my following article about The Cathedral and the Bazaar).
OSS started with the Open Source Initiative, founded in 1998 by Bruce Perens and Eric S. Raymond. Around the same time, Netscape released the source code of Netscape Communicator. The OSI was inspired by the free software movement, led by Richard Stallman, founder of the GNU Project.
Now: I won’t talk about the benefits of developing OSS as a public project (that goes into another article), but I will explain why OSS is important in today’s world. Here are some examples of important software that is OSS:
Linux: if you think of OSS, you think of Linux. One of the biggest projects, Linux is a kernel used in various operating systems. Today, most servers run in a Linux-based environment.
OpenPGP: online privacy is crucial, since almost everyone can get on the internet today. Cryptography makes this work. OpenPGP is an open standard for end-to-end encryption, derived from PGP (Pretty Good Privacy) and based on private and public keys.
MySQL: data drives this world. If we cannot store it, we cannot analyze it.
GNU: oh boy, where to start with this one. Most Linux-based systems are built upon GNU tools.
As an occasional poetry/short-story writer, I tend (with certain exceptions) to reread my works before marking them as concluded. Not only to check spelling or logical mistakes, but to check their quality. Well, software development is the same.
Software testing is crucial for functional software. Why? Because we tend to make mistakes. If we were able to write flawless code, then we might not need testing. But given that only Jeff Dean might be like that, we, the mortal programmers and engineers, must test.
Testing helps us find bugs and fix them. Bugs are those little errors in the code which may go unnoticed at first. But when you notice them, they are a big headache. So detecting them during development is far better than detecting them in production.
There are two big types of testing: static and dynamic. The first consists of reading the code, checking the documentation, and asking others to review your code. Basic stuff. Dynamic testing is more complex: it is the opposite of static testing. Don’t just read; actually run the code. Work. Try. Experiment.
Some examples of dynamic testing are:
Alpha & beta testing: give your project to a limited number of users. They will find bugs for you, because users are the worst enemy of programs.
Destructive testing: try to break the program. If you succeed, then you must fix it.
Regression testing: when you modify the code, re-run your existing tests, because changes tend to break things that used to work.
A/B testing: give two different versions of the product to different groups of users and compare how they perform.
As for the levels of testing, we can find:
Unit testing: as its name suggests, it focuses on units: simple modules, each small brick of the big building.
Integration testing: it focuses on the integration between units.
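A minimal sketch of unit testing with Python’s built-in `unittest` module. The `add()` function and its test cases are hypothetical examples:

```python
import unittest

def add(a, b):
    # The "unit" under test: one small, isolated piece of logic.
    return a + b

class TestAdd(unittest.TestCase):
    def test_positive(self):
        self.assertEqual(add(2, 3), 5)

    def test_negative(self):
        self.assertEqual(add(-2, -3), -5)

if __name__ == "__main__":
    unittest.main()
```

Each test exercises the unit in isolation; integration tests would instead wire several such units together and check that they cooperate correctly. These same tests, re-run after every change, are also what makes regression testing cheap.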
I have already written a post about software design. Why write another one? Well, mainly because software architecture isn’t the same as software design. Sure, architecture is part of design. But architecture itself is an area so deep that it needs to be talked about in another post. So, here it is.
Software architecture defines the fundamentals of a piece of software: the components themselves, as well as their basic functioning and the communication between them.
Some advantages of software architecture are:
Separation of concerns: this allows different people to work on different areas.
Quality: defining a good architecture actually helps in improving quality.
Conceptual integrity: it helps to get a general vision of what the software does.
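Separation of concerns can be sketched with a tiny layered design: storage, core logic, and UI live in separate pieces that only talk through small interfaces. All the names here are hypothetical:

```python
class Storage:                      # "database" layer: only knows how to store
    def __init__(self):
        self._notes = []
    def save(self, note):
        self._notes.append(note)
    def all(self):
        return list(self._notes)

class NoteService:                  # core layer: business rules, no UI/storage details
    def __init__(self, storage):
        self.storage = storage
    def add_note(self, text):
        if not text.strip():
            raise ValueError("empty note")
        self.storage.save(text.strip())

def render(notes):                  # UI layer: plain-text output for a CLI
    return "\n".join(f"- {n}" for n in notes)

service = NoteService(Storage())
service.add_note("buy milk")
print(render(service.storage.all()))   # prints: - buy milk
```

Because each layer only depends on the one below it through a narrow interface, one person could swap `Storage` for a real database while another redesigns `render()`, without either stepping on the other — which is the practical payoff of the separation-of-concerns advantage above.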
Why is this important? As I have mentioned in other posts, in small projects or prototypes, architecture may not be that important. But as soon as the project begins to grow, order is not an option. It is a necessity. Therefore, I consider it good practice to define an architecture at the beginning. Now: this may sound too much like the waterfall method, but that’s not what I’m trying to say. Software changes. If the architecture you developed does not fit your requirements anymore, change it. Don’t be afraid of changes.