This course is the first time I have Ken as one of my teachers. I had heard many people describe his style, but it’s nice to finally get to experience it. So far I’ve had a very good time with the class and don’t really have any big complaints. I think I’m in a situation that not many of my classmates share. Ken’s classes aim to fix the many problems with traditional learning, which leaves many students just looking to pass and not learn. In my case, I don’t have a problem with traditional classes, so I’ve never seen Ken’s courses as a solution to me not truly learning.
I’ve really enjoyed the course, but there are some classes I still prefer to take in a traditional way, like math and core programming courses. I realize there are many cons to traditional courses, but I may just be so used to them that I call it personal preference. I have years of experience with traditional teachers, and I’m still not as experienced with learning Ken’s way. Maybe with more experience I’d feel comfortable enough to take those courses differently.
To end this blog, here is a video some classmates and I made with our final words on the course. I appear somewhere around the 2-minute mark.
Software maintenance is a part of development that I haven’t experienced before, but I know it is very important in a professional environment. When studying, projects are made, handed in, and forgotten. Professionally, that isn’t the case.
Software Maintenance, as defined by these NCSU slides, is whatever modifications may be made to a program after it has been released. There are many reasons for maintenance, but the main ones are adding features and fixing bugs.
There are four main types of maintenance:
Corrective: Fixing faults with the system, like bugs. One needs to find the source of the problem, change the code, ensure no new errors were introduced, and update documentation if needed.
Adaptive: Making the system work in a different environment, a new OS for example, and ensuring everything still works.
Perfective: From the slides, “software maintenance performed to improve the performance, maintainability, or other attributes of a computer program”.
Preventive: Even if there are no problems, improve the code to make it last longer. This may include switching to new technologies, reengineering, and refactoring. Functionality isn’t commonly added in this case.
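The preventive case above can be made concrete with a small refactoring sketch. This is a hypothetical example (the class and method names are invented): duplicated discount logic is extracted into one helper, so behavior stays the same but future fixes only happen in one place.

```java
// Hypothetical sketch of preventive maintenance via refactoring.
// Before, the discount formula was copy-pasted into each method;
// here it is extracted so the rule lives in exactly one place.
public class PriceCalculator {

    // Extracted helper: the single place to maintain the discount rule.
    private static double applyDiscount(double price, double rate) {
        return price - price * rate;
    }

    public static double memberTotal(double price) {
        return applyDiscount(price, 0.25); // members get 25% off
    }

    public static double saleTotal(double price) {
        return applyDiscount(price, 0.50); // sale items get 50% off
    }

    public static void main(String[] args) {
        System.out.println(memberTotal(100.0)); // 75.0
        System.out.println(saleTotal(100.0));   // 50.0
    }
}
```

Note that no functionality was added, which matches the definition: the program does the same thing, it is just easier to keep alive.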
User Interface (UI) Design focuses on anticipating what users might need to do and ensuring that the interface has elements that are easy to access, understand, and use to facilitate those actions.
In other words, making sure users are able to get stuff done with whatever software you make. There are many kinds of interfaces, and one of the most commonly known is the Graphical User Interface (GUI), whose creation is a big part of User Interface Design.
Successful user interfaces tend to be consistent and predictable. These qualities mean users don’t have to learn UIs from scratch for every new piece of software. The main categories for UI elements are (also from usability.gov):
Input: How the user communicates with the program. In many cases this is text, but it also includes how you can use a file to give the program instructions.
Navigation: Elements that allow you to get somewhere inside the program. They aren’t what the user ultimately wants, but they are necessary to get there.
Information: How information is communicated to the user. Most of the time this is done with text, but images can also convey information. These elements may change depending on how much info should be given to a user without being overwhelming.
Containers: Organization elements typically used to store more important information inside.
After a long journey, we are at the step in the SDLC where we have implemented what we think will solve the initial problem and match the initial requirements. After coding is done, it is very possible that the system has some mistakes, and releasing it as is could end in disaster. That’s why Verification and Validation are parts of the SDLC.
As a whole, verification and validation refer to making sure the project does what it’s required to do. Still, there are differences between the terms.
Verification focuses on the project as it evolves in development. According to SoftwareTestingFundamentals, it happens at every phase of the project to ensure you are going in the right direction. The main goal of verification is to ensure the product meets the requirements. This means we are dealing with the direction the project is taking.
On the other hand, validation focuses on the actual code and making sure it really works for its users. Testing is the most obvious and important way to validate. At this step you want to ensure the software is ready to be used by its intended users without problems.
The order of both can be altered but verification usually comes first.
One of the topics from the semester that I understood the least was software design patterns. At the time, they seemed very abstract and I didn’t go in depth enough to understand any of them. Since I haven’t worked on any big projects, most of the benefits that are mentioned do not apply to me, yet.
I got interested in the topic after watching the following YouTube video by funfunfunction. It goes over what composition is and its advantages:
Simply put, composition is making new types by having an instance of another type provide part of their functionality. It’s different from inheritance in that you have, not are, another object.
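That “have, not are” idea can be sketched in a few lines of Java. This is not the video’s example (that one is in JavaScript); the names here are made up for illustration. A Logger holds a Storage instead of extending one, so what it writes to can be swapped without touching Logger’s type.

```java
// A minimal composition sketch (invented names, for illustration only).
// Logger "has a" Storage and delegates to it, rather than "is a" Storage.

interface Storage {
    void save(String text);
}

class MemoryStorage implements Storage {
    final StringBuilder contents = new StringBuilder();
    public void save(String text) { contents.append(text).append('\n'); }
}

// Logger is composed of a Storage rather than inheriting from one.
class Logger {
    private final Storage storage;
    Logger(Storage storage) { this.storage = storage; }
    void log(String message) { storage.save("[LOG] " + message); }
}

public class CompositionDemo {
    public static void main(String[] args) {
        MemoryStorage memory = new MemoryStorage();
        Logger logger = new Logger(memory); // any Storage could go here
        logger.log("hello");
        System.out.println(memory.contents.toString().trim()); // [LOG] hello
    }
}
```

If requirements change later, a new Storage (a file, a network call) can be plugged in without rewriting Logger, which is the flexibility inheritance hierarchies tend to lose.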
What made this video especially relevant for me was that in POO (OOP class), we learned about inheritance as one of the main benefits of OOP. Inheritance to me was one of the better ways to achieve code reuse, and I thought it made a lot of sense when talking about OOP. Since taking that class I very often try to make inheritance work in my (school) projects.
Learning that one of the main concepts I had learned in POO was considered bad led me to understand one of the main differences between coding for homework and coding as a job: planning. Ken has even mentioned this in class, but now I’m able to relate it to myself. After watching the video, I felt like my way of coding to date had been wrong, but I slowly came to understand how heavily planning affects the situation. In the video’s example, not knowing the future was the main reason why inheritance didn’t work. When I’m doing homework, inheritance works perfectly because there are no users and I know how the code will work before I start. That situation just doesn’t happen in the real world.
JUnit is an open source test framework for Java. It focuses on unit tests and, according to Wikipedia, it is the most popular external library for Java on GitHub. Kent Beck is one of the developers of the project.
Here is a simple code example to show its syntax:
Calculator calculator = new Calculator();
int sum = calculator.evaluate("1+2+3");
assertEquals(6, sum);
On a very basic level, JUnit “asserts” or checks that the two values passed to assertEquals are equal: the output of a function and its expected value. If everything works, JUnit won’t report errors. If there are errors, JUnit will try to give a detailed explanation of which parts of the output didn’t match.
JUnit has many more tools to make testing easier. In a large project, the number of tests will be huge, so there are many ways to group tests into different classes and methods.
JUnit can be used for Test-Driven Development. This technique consists of first writing a set of tests that a program should pass and then writing code that passes them. This kind of focus means that you have to know what a piece of code will do even before you write it, so you’re required to have a clear idea of the code. Knowing exactly what to do allows you to code only that and not lose time thinking about functionality at the same time as programming.
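The test-first idea can be sketched without any framework; plain checks stand in for JUnit assertions here, and the Calculator class is a hypothetical implementation written only to satisfy them. The point is the order: the checks in main were “written first”, and the code above them exists to make them pass.

```java
// TDD sketch: the checks came first, the implementation came second.
// Calculator and evaluate are assumed names, not a real library.

class Calculator {
    // Just enough code to pass the tests below:
    // sums non-negative integers separated by '+'.
    int evaluate(String expression) {
        int sum = 0;
        for (String part : expression.split("\\+")) {
            sum += Integer.parseInt(part.trim());
        }
        return sum;
    }
}

public class TddDemo {
    public static void main(String[] args) {
        Calculator calculator = new Calculator();
        // These were written before Calculator existed.
        if (calculator.evaluate("1+2+3") != 6) throw new AssertionError();
        if (calculator.evaluate("10") != 10) throw new AssertionError();
        System.out.println("all tests pass");
    }
}
```

With JUnit, each check would be its own @Test method, which is what makes the failure reports so detailed.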
Here is the official FAQ to answer any specific questions.
In the open source world, there are many ways in which people can work. Eric Steven Raymond wrote a very good essay on the topic called The Cathedral and the Bazaar. The author talks about the way he has experienced the different ways of managing a project, which he calls the Cathedral and the Bazaar.
The Cathedral style is when most of the work is done by a few people. It frequently involves complex projects where only a select few individuals fully understand the code, so they are the only ones to contribute to its development. Users find themselves in a situation where they have to rely on these people because they aren’t allowed to help or aren’t capable of it.
The Bazaar style is where everyone is encouraged to contribute and a community is formed around a project that is responsible for fixing and adding features. Releases tend to be much more common since a big number of people are committing changes all the time.
Eric talks about how he believed the Cathedral style to be necessary for some projects, especially complex ones. His perspective changed after a project became very successful by using the Bazaar style. That was Linux.
An operating system kernel is a very important piece of software and is usually very complex. Apart from complexity, a kernel is very big. For Eric, these signs pointed to a Cathedral style, but Linux became even bigger thanks to its bazaar approach.
A big part of Linux’s success is thanks to its relationship with developers. Linux has a kind of reward system where good contributors are admired.
Size is also important, since Linux really benefits from its great number of developers. I find it astonishing that thousands of people are working on a single software project and are still able to achieve great things.
Software Implementation refers to a very broad set of steps to take to get a program ready for release. As the name implies, it has a lot to do with how the code is written and presented. The key aspect of Software implementation is that you move from planning (what you will do) to doing (including all necessary tools). Software implementation also tends to include testing and some aspects of management.
Knowing everything about software implementation isn’t an easy task due to its broadness. Peter Lo lists the categories of software needed for successful implementation.
Over the course of the document, Peter mentions many options for each category, but he also mentions that each project has different needs. A good programmer should be able to choose the right tools for the job and not stick with what he already knows. Familiarity is important, but sometimes there are many more advantages to using other options.
I found the presentation quite intimidating, since I don’t even know half the categories he lists. This list shows very clearly that software engineering is much more than just programming, and it doesn’t even go into what happens outside the software aspect of engineering.
Design patterns are solutions to common problems that aim to save time. They are not finished solutions but a path that can be taken to get a faster start. They can be: Structural (relationships between modules), Creational (dealing with the creation of objects), and Behavioral (communication between entities).
Adapter (Structural): Modify interfaces to allow classes to work together.
Command (Behavioral): Make an object for requests. You can now treat requests in many different ways, like putting them in a queue.
Abstract Factory (Creational): An interface to create families of related objects without specifying their concrete classes.
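To make one of these less abstract, here is a minimal Adapter sketch in Java. All the names are invented for illustration: the client wants a Printer, but the only thing available is a LegacyConsole with a different interface, so an adapter sits between them.

```java
// Minimal Adapter sketch (invented names, for illustration only).

// The interface the client code expects.
interface Printer {
    void print(String text);
}

// An existing class with an incompatible interface.
class LegacyConsole {
    void writeLine(String line) { System.out.println(line); }
}

// The adapter implements the expected interface and delegates to the legacy class.
class ConsoleAdapter implements Printer {
    private final LegacyConsole console;
    ConsoleAdapter(LegacyConsole console) { this.console = console; }
    public void print(String text) { console.writeLine(text); }
}

public class AdapterDemo {
    public static void main(String[] args) {
        // Client code only knows about Printer; the legacy class is hidden.
        Printer printer = new ConsoleAdapter(new LegacyConsole());
        printer.print("adapted!");
    }
}
```

Neither class had to change, which is the whole point of the pattern: the adapter does the translating.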
Dofactory has many examples that show how these patterns can be used with real code. The site also provides many diagrams that explain the concepts behind the patterns, but seeing actual code helped me understand why design patterns are useful. What really made it click for me were the “real world use” examples, not because they seemed like problems I would encounter, but because they allowed me to see what kind of problems design patterns can solve.
While the concept behind many of these patterns is very simple, a key problem for engineers is knowing when each of them should be used; that’s why I found the concrete examples so useful. Choosing a pattern is a decision that will impact a project in many ways, so picking the best option will surely make things a lot easier.
As our society grows more dependent on computers, the software we run is of critical importance to securing the future of a free society. Free software is about having control over the technology we use in our homes, schools and businesses, where computers work for our individual and communal benefit, not for proprietary software companies or governments who might seek to restrict and monitor us.
The Free Software Foundation is one of the most important organizations when it comes to open source, as it is responsible for some of the most used pieces of software in the world. It aims to make software that benefits the user and is completely free, among many other things. Organizations like these use open source as a way to make trustworthy software. As a whole, the FSF represents much more than open source software, but open source is essential for it to achieve its goals.
Open source refers to software projects that make their source code available to all. In many cases, this means one can just download it off the Internet for free. Still, open source isn’t necessarily free (as in price) software. Open source allows users to see exactly what is going on when they run a program.
Open source puts the power in the hands of everyone, including users. For example, there are a lot of things that you can’t change about Windows; even if a change is objectively better, you’re stuck with what you’ve got. On the other hand, open source allows you to change anything you want and, in a lot of cases, make your changes part of the package.
Having software open to everybody changes much more than just its legal status. These changes can go from development practices all the way to community building. A big part