Postmortem No. 4

During this week we mostly worked on our methodology. We had some big issues with our working style, such as:

  • Issues were extremely abstract and ambiguous. They weren’t measurable.
  • The project is divided into areas, and each of us was assigned to one, so most of the work was isolated and never produced integration issues (which are exactly what we need in order to learn).
  • Communication was bad, since we thought it wasn’t required.

We fixed them and ended up with the following characteristics:

  • From 4 abstract issues we ended up with about 15 concrete issues, which can be measured and estimated.
  • We integrated as a team and now split issues that span multiple areas.
  • We now use our class time to work together on non-isolated issues.

As for the actual work, I participated in the following:

  • Fixing the build of the HTTP broker (some tests didn’t pass).
  • Adding logging to the HTTP broker (a sketch of that kind of middleware is shown below).
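
Below is a minimal sketch of the kind of request logging I mean, assuming the broker uses Go's standard net/http package. The /readings route and the handler body are illustrative placeholders, not the broker's actual API.

```go
package main

import (
	"log"
	"net/http"
	"time"
)

// loggingMiddleware wraps a handler and logs the method, path and duration
// of every request the broker receives.
func loggingMiddleware(next http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		start := time.Now()
		next.ServeHTTP(w, r)
		log.Printf("%s %s took %v", r.Method, r.URL.Path, time.Since(start))
	})
}

func main() {
	mux := http.NewServeMux()
	// Illustrative route; the real broker exposes its own calls.
	mux.HandleFunc("/readings", func(w http.ResponseWriter, r *http.Request) {
		w.WriteHeader(http.StatusOK)
	})
	log.Fatal(http.ListenAndServe(":8080", loggingMiddleware(mux)))
}
```

Wrapping the whole mux means every endpoint gets logged without touching the individual handlers.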

I also started a Kubernetes course on Lynda. I have some basic knowledge, but I need more (both for this project and for my job).

Postmortem No. 3

This week we finished our first deliverable and presented it. We had some complications with it, most of them related to planning:

  • Objectives were very general.
  • Most of us worked with technologies we weren't that familiar with. In my case, I used Go, a language I am familiar with, but I wasn't familiar with our database, MySQL, or its Go drivers.
  • Work wasn’t distributed that well.
  • Each of us worked on an atomic part and we assumed joining the pieces would be trivial. We didn't budget time for this.

But not everything went badly. I also did some good work:

  • Developed a functional, database-connected HTTP broker in Go (a sketch of the database wiring is shown after this list).
    • Used TDD.
    • Wrote its documentation as a Swagger spec.
  • Defined the architecture of the system.
  • Started using ZenHub.
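
As a reference for the database part, here is a minimal sketch of how a Go service typically talks to MySQL through database/sql and the go-sql-driver/mysql driver. The DSN, table and column names are placeholders for illustration, not the project's real schema.

```go
package main

import (
	"database/sql"
	"log"

	_ "github.com/go-sql-driver/mysql" // MySQL driver, registered via its init()
)

func main() {
	// Placeholder DSN; the real broker should read this from configuration.
	db, err := sql.Open("mysql", "user:password@tcp(localhost:3306)/berryhouse")
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	// Store one sensor reading (illustrative table and columns).
	_, err = db.Exec(
		"INSERT INTO readings (sensor, value) VALUES (?, ?)",
		"temperature", 23.5,
	)
	if err != nil {
		log.Fatal(err)
	}
}
```

In the actual broker the connection string should come from the environment rather than being hard-coded.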

In general, there are some areas we need to improve, but we're not totally failing.

 

Postmortem No. 2

During this week I worked on the HTTP broker, which will receive data from the sensor platform, process it, and store it in the database. We decided to use HTTP instead of other protocols (such as MQTT or a raw TCP protocol) since the ramp-up is easy for both sides.

On my own, I finished a basic build of the broker, which only has one call and does not connect to the database yet. I developed it with TDD: I first wrote unit tests and then implemented the functionality on top of them. In particular, I learned how to test HTTP servers; in previous projects I had never tested them, since I didn't know how.
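
The pattern I learned relies on the standard net/http/httptest package: build a fake request, record the response, and assert on it. The handler and endpoint below are illustrative, not the broker's actual call.

```go
package main

import (
	"net/http"
	"net/http/httptest"
	"testing"
)

// healthHandler is an illustrative handler standing in for a broker endpoint.
func healthHandler(w http.ResponseWriter, r *http.Request) {
	w.WriteHeader(http.StatusOK)
	w.Write([]byte("ok"))
}

// TestHealthHandler exercises the handler through a recorded request
// and checks the status code and body.
func TestHealthHandler(t *testing.T) {
	req := httptest.NewRequest(http.MethodGet, "/health", nil)
	rec := httptest.NewRecorder()

	healthHandler(rec, req)

	if rec.Code != http.StatusOK {
		t.Fatalf("expected status 200, got %d", rec.Code)
	}
	if rec.Body.String() != "ok" {
		t.Fatalf("unexpected body: %q", rec.Body.String())
	}
}
```

In the TDD flow, a test like this is written first and fails; the handler is then implemented until the recorded response matches the expectations.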

Also related to the broker, I wrote a Swagger documentation file, which will help Carlos develop the connection between the sensor platform and this broker.

Postmortem No. 1

Complementary to the previous post, postmortem posts are more technical and detailed: they describe what I did during the previous week. In this particular case, the content will not be as technical as in future posts, since it only covers what we will be developing this semester. So, without further delay, straight to the project.

BerryHouse is a modern open source technology that allows anyone with a Raspberry Pi, a computer/server and some time to build a small greenhouse. It lets people cultivate various types of plants and monitor their conditions, such as humidity, temperature and sunlight.

The team is composed of the following:

  • Lucía Velasco – A01631385 (@luciavg)
  • Carlos Martell – A01225920 (@carlosmartell97)
  • Alejandro Güereca – A01631731 (@dragv)
  • And myself, Miguel Miranda – A01631246 (@mmiranda96)

Our stack of technologies (which is still open, so recommendations are welcome) is the following:

  • NodeJS + React (web client)
  • SQL database (to be chosen)
  • Go (HTTP broker)
  • Raspberry Pi + Python + sensors (data input)
  • Docker Swarm (container cluster)

Our first delivery is still being defined, but it will contain a basic cluster with each required service in a “Hello world!” style build: very simple, mainly to prove the structure of the whole system. All the code and issues can be seen on the GitHub page.
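
To give an idea of what a “Hello world!” style service in that first delivery might look like, here is a minimal sketch in Go; the port and message are placeholders, and each real service would replace this with its own logic.

```go
package main

import (
	"fmt"
	"log"
	"net/http"
)

// A minimal service in the "Hello world!" style of the first delivery:
// it only proves that the container runs and answers HTTP inside the cluster.
func main() {
	http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		fmt.Fprintln(w, "Hello world!")
	})
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```

Each of these containers would then be deployed as a service in the Docker Swarm cluster.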