I Don’t Want Just Anyone To See My Data

--Originally published at Debugging My Mind

Confidentiality, as I’ve mentioned before, is a key property that any application must have. You don’t want the personal data that apps so casually ask for to be out there for anyone to see; otherwise we might as well walk around with all our information taped to our backs.

This is why, as developers, we have to assume that any information we ask of someone can be personal. We don’t know whether a given user considers their name public, or thinks it should be kept private unless absolutely needed, so it’s better to keep things as confidential as possible by default, rather than publicly exposed and easy to obtain.

One of the main methods for keeping our users’ stored data confidential is encryption. For passwords specifically, a hashing algorithm turns the plain text into an unreadable digest that takes infeasible amounts of computation to reverse; adding a salt and a pepper makes it even harder to get back to the original text, since the hashes can no longer be precomputed in bulk for a rainbow-table attack.
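The idea can be sketched with Python’s standard library. This is an illustration, not our app’s real configuration: the pepper value and iteration count here are made up, and in practice the pepper would live outside the database (for example, in an environment variable).

```python
import hashlib
import hmac
import secrets

# Hypothetical application-wide secret ("pepper"); illustrative value only.
PEPPER = b"app-wide-secret"

def hash_password(password, salt=None):
    """Hash a password with a random per-user salt plus the global pepper."""
    if salt is None:
        salt = secrets.token_bytes(16)  # a unique salt defeats precomputed rainbow tables
    digest = hashlib.pbkdf2_hmac("sha256", password.encode() + PEPPER, salt, 100_000)
    return salt, digest

def verify_password(password, salt, expected):
    """Recompute the hash with the stored salt and compare in constant time."""
    _, digest = hash_password(password, salt)
    return hmac.compare_digest(digest, expected)
```

Only the salt and digest would be stored; without the pepper, even a stolen table of hashes is much harder to attack in bulk.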

Handling the data of young children makes it even more sensitive, as their personal information is in the hands of other people who have to be responsible for it. This wasn’t an issue back when we were younger, but these kids live in the digital age, where the internet is everywhere and their information can easily be thrown out there for anyone to see, without their consent or even their knowledge. It wouldn’t be pleasant for us, as adults, to find out that back when we were young and couldn’t do anything about it, our information was put out there for anyone to find and read.

It’s for this reason that it’s better to treat all data related to our users as sensitive, be it their name, age, or gender, and keep it encrypted and behind proper authentication, since their confidentiality and the app’s security are our responsibility as developers.

What Happened With My Data?

--Originally published at Debugging My Mind

Integrity: the property of data not being altered or destroyed by an unauthorized entity. Such a property isn’t so easily assured to a client, and that’s why we needed a plan for how we were going to achieve it.

Our application consists of two separate main modules. The school management side is purely the web implementation: registering teachers and students and keeping track of their information. On the other side we have the game itself, responsible for all the logic that serves the levels at their defined difficulty, as well as keeping track of each student’s temporary progress data.

Our main concern related to data integrity arose from the communication between these modules, more specifically between the game and the database. Since there has to be a stable connection to store each completed level and to obtain the newly predicted difficulty for the game, there is a chance of data being corrupted or lost should a connection error occur.

Because of the glaring problem that losing the connection between the game and the server creates, we decided to design the game so that it works whether or not there is a connection to the database. To do this, the game performs several checks before starting: first, it tries to connect to the server to obtain the user’s required information, such as their profile, the newly predicted difficulty, and their money for the store. Should this connection fail, there are two options. If there is no locally stored information, the game begins in a “default” mode, where the player plays as if starting from zero, allowing the game to be played even if there are technical difficulties with the server. The second option is to load locally stored data: the game not only saves data to the server, but also stores it on the computer it is played on so it can be used offline. This disables the automatic difficulty prediction, but lets a player continue where they left off.

Once the connection with the server is re-established, the game uses the offline data to continue the gradual difficulty it had stored and sends it all back to the server, in order to create an accurate new difficulty prediction for the next session. This way, the progress a student made offline isn’t just discarded, and the stale server-side prediction isn’t used when the local one might be quite different by that point.
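The fallback chain described above can be sketched roughly as follows. The function and file names here are hypothetical stand-ins, not our actual implementation, and the real server call is stubbed out:

```python
import json
import os

LOCAL_SAVE = "progress.json"  # hypothetical local save file

def fetch_profile_from_server():
    """Stand-in for the real server call; raises ConnectionError when offline."""
    raise ConnectionError("server unreachable")

def load_profile():
    """Try the server first, then local data, then fall back to 'default' mode."""
    try:
        return fetch_profile_from_server()
    except ConnectionError:
        if os.path.exists(LOCAL_SAVE):
            with open(LOCAL_SAVE) as f:
                return json.load(f)           # resume where the player left off
        return {"difficulty": 1, "money": 0}  # "default" mode: start from zero

def save_progress(profile):
    """Always keep a local copy so an offline session is never lost."""
    with open(LOCAL_SAVE, "w") as f:
        json.dump(profile, f)
```

Because every session writes the local copy, the later sync-back to the server always has something to work from, even after several offline sessions in a row.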

Going deeper into the database, each player’s profile has a stored difficulty, generated from the average results of the previously played levels. This means that if level-specific data is lost during a save, the player’s current difficulty stays intact, so such failures can occur without causing real damage.
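A minimal sketch of that idea: derive the next difficulty from the average of recent level results, and fall back to the stored profile difficulty if the level data is missing. The thresholds and step sizes below are illustrative assumptions, not our game’s actual tuning:

```python
def predict_difficulty(level_results, current, floor=1, ceiling=10):
    """Derive the next difficulty from the average score of recent levels.

    If per-level data was lost (empty list), keep the stored difficulty,
    so a failed save never corrupts the player's progression.
    """
    if not level_results:
        return current
    average = sum(level_results) / len(level_results)
    if average >= 0.8:               # doing well: step the difficulty up
        return min(current + 1, ceiling)
    if average < 0.4:                # struggling: step it down
        return max(current - 1, floor)
    return current                   # middling results: hold steady
```

The key property is the first branch: losing the detailed results degrades the prediction gracefully instead of resetting the player.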

Another mechanism we considered as a kind of backup for the player’s difficulty was a seed that would encode the difficulty setting in a small, simple string. This wasn’t possible to implement due to time restrictions, and remains an idea for future work.

Backups are a safe way of preserving data integrity even in extreme scenarios of technical and hardware failure, but this kind of solution requires resources like money and time: separate drives where the backups must be continuously written, plus the effort of verifying that they actually work.

Let’s Get Paranoid

--Originally published at Debugging My Mind

Everyone is out there to hack you and get your personal/private information! Well, maybe not everyone, and maybe it won’t happen to you at all in the course of your life (or at least not without you noticing), but this is a real danger that comes with the use of technology, especially the internet, where it’s as easy as starting a blog like this one and writing off the top of your mind without a care in the world.


Image by Henri Bergius

So as a user you hear the usual advice: “Get an antivirus,” “Use different strong passwords for your accounts,” “Don’t click on the DOWNLOAD HERE links or enter your credit card information to win the amazing prize you just got on this random website.” But even if you followed every piece of security advice you keep hearing, you might still lose access to an account or have your information taken. This is where not only the end user has to take appropriate security measures, but where our job as software engineers begins.

What good is it for our users to take several extra steps to protect themselves if the software we create has a glaring flaw, an easy backdoor to access? When the information gets taken, it sits there as clear as plain text, which attackers can then use, domino-style, to cause more damage to us and our users.

Here’s where the title comes in: let’s get paranoid, not only about our security as users of a piece of software, but as its developers. Security is often treated as an “opt-in” mechanism, cutting corners on the extra work it takes to add an already tested and reliable library that provides a new layer of protection, mostly because merely researching the method seems like an unknown monster to tackle. You don’t know whether it’s going to be easy or tough, so we just choose to ignore it.

This is where all the security horror stories help so much: to get scared, to get paranoid in a way that makes us realize how adding more and more of those low-cost security layers (which can actually be easy to implement) makes our application a harder and harder target. Sure, we can’t promise it’ll be the most secure, or impossible to breach, but the first things attackers try on a target are the simple attacks, the easiest and most obvious ones they can think of. They’re not going to pull off a super complicated and convoluted attack out of the blue. If you were going to enter a restricted area or room, wouldn’t you first check whether the door is actually locked? It’s the same principle, and the reason adding those simple extra layers of security is so important: at the very least we have to make it a bigger effort for them to get into the room.

An example of this is encryption: instead of encrypting only the “sensitive data,” why not encrypt practically everything? It’s cheap, it’s easy if you’re already doing it for some records, and for hashed values you can add a salt and a pepper to stay secure even against a rainbow-table attack on your tables.

There is a real issue with adding too much security: the trade-off in usability we’ve mentioned before. The more layers you add, the harder and more time-consuming your software becomes to access and use, so this is a conscious decision you have to make depending on the goal of your application.

In our case, where we’re currently developing a web application for elementary school children to help them practice and reinforce their math knowledge, there are certain things to keep in mind. While we don’t store much sensitive data about the children, we’re highly responsible for it, since they don’t even know their personal information is being stored somewhere that could be attacked and stolen from, and we have to make sure that such a thing is very unlikely to happen. For example, going back to my earlier recommendation, we’re going to encrypt all the data we store, even the parts that aren’t sensitive. Better safe than sorry, right? Better be paranoid.

On the other side, we’ll need to handle authentication for the different user levels. We don’t want a student to be able to access a teacher’s records and data, nor do we want a teacher to see outside their own group, as it isn’t needed. Different roles with different levels of authorization will be needed to manage the web application, handled through individual accounts with passwords (which then leads into the whole issue of closing a session after it’s been inactive long enough, avoiding yet another problem).
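Those two mechanisms, role checks and inactivity timeouts, can be sketched in a few lines. The role names, resource names, and the 15-minute timeout are hypothetical, chosen only to illustrate the deny-by-default idea:

```python
import time

# Hypothetical role table for the school app: each role lists what it may see.
PERMISSIONS = {
    "student": {"own_progress"},
    "teacher": {"own_progress", "group_records"},
    "admin":   {"own_progress", "group_records", "all_records"},
}

SESSION_TIMEOUT = 15 * 60  # seconds of inactivity before forced logout (assumed)

def can_access(role, resource):
    """Deny by default: unknown roles or resources get no access at all."""
    return resource in PERMISSIONS.get(role, set())

def session_expired(last_activity, now=None):
    """True once the session has been idle longer than the timeout."""
    if now is None:
        now = time.time()
    return (now - last_activity) > SESSION_TIMEOUT
```

The important design choice is that `can_access` returns `False` for anything it doesn’t recognize, so forgetting to register a role locks it out instead of letting it through.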

While it’s true that there is more we could do to make our application safer, there is also the short development time we have to deliver the final product, which means we have to choose what takes precedence and what we can or cannot add to development.

In the end, I do believe one of the goals has been met. I, for one, have honestly become way more paranoid, and I don’t mean that in a bad way; I won’t be losing sleep or feeling constantly anxious, but security is now something that genuinely crosses my mind in my daily use and creation of software, and I think that’s one of the most valuable things I’ve gotten from the security course.

The Security Triforce

--Originally published at Debugging My Mind


It is said that if you gather all three pieces of the security triforce, your software will be the most protected. In all seriousness, this is known as the CIA (or AIC) security triad, and it refers to the most common focus areas when protecting systems: Confidentiality, Integrity, and Availability.

Over the next few months, my team and I will be creating a web application focused on helping the 2nd grade students of a particular school, where security will be an important topic throughout development.

At first glance you might be able to discern what these three terms mean from the words alone, but I’ll go through them quickly and specify what they represent, as well as how each of them will (or won’t) be necessary in the app we’ll be developing.

Confidentiality: the ability and property of keeping delicate and important information hidden or encrypted in such a way that unauthorized individuals are incapable of accessing it, and even if they somehow do, are unable to understand it.

In order to customize each child’s experience with the math mini-games we’ll be implementing, as well as the reports the teachers will receive, delicate and important information about them may need to be stored in the app’s database.

Since the personal information of young children will be handled, we have to be very careful to keep it as confidential as possible, and make sure not to keep data that is no longer used (for example, children who have left the school or who simply won’t be using the application anymore shouldn’t have their data kept after some time has passed). I believe this property is the most important one for this project. These children may choose to provide their information to specific sites or applications of their own will in the future, but today this choice is made for them, and it can’t be taken so lightly as to handle their information carelessly.


Integrity: the importance of data not being altered or destroyed by an unauthorized entity, whether through the modification of a file or a change to the system’s configuration. This usually comes up when a file is infected by a virus, or when data is modified in transit through the network, like an email on the internet.

In this case, integrity isn’t a property of huge importance for our application. While there are still some measures to be taken, the system will constantly be storing and modifying the data used to customize the students’ exercises, so the corruption of one record isn’t a big deal, as it will soon be replaced by another based on the children’s performance.

Availability: As the name implies, this property refers to the system’s ability to remain accessible even when there is an error or data corruption. It is usually achieved through redundancy, so that if a piece of hardware fails, another can take over and keep the system running and usable.

For our application, due to the limited hardware and resources available to us, achieving redundancy and constant availability is complicated and sits lower on the priority list. The best solution we can offer is hosting the application on a separately hired server rather than on one of the school’s computers, which could suffer a hardware malfunction. While a hired server (like Amazon Web Services) gives us the much-desired constant availability and a fault-tolerant system (most of the time, at least), it requires a recurring fee, and it’s up to the school to decide whether to adopt it.

All in all, regular users of software won’t have this triad in mind when using it, often taking things for granted. We as software engineers have to be careful and make sure our designs and implementations are as secure as our resources and abilities allow, and most importantly, that all of these features work by default. The default setting should always be the secure option.

Empathy & Leadership

--Originally published at Debugging My Mind

Today I’m going to talk about two TED talks I watched: one by Sam Richards, about empathy, and a second by David Marquet, focused on leadership.

We always think of empathy in the stereotypical short definition of “putting yourself in someone else’s shoes,” which isn’t wrong; the question is how deeply we actually think about it. As a student, I’ve been able to travel to different countries a few times, the most special being the summer I lived in Japan: living with a Japanese family, studying the language, experiencing the culture, and meeting people from many different countries. An experience like this is, I believe, one of the biggest eye-openers a person can have toward the world; your moral rules, your stereotypes, your opinions on plenty of things all change when you truly notice how small a world you were originally living in.

I can safely say that experience made me a very open-minded person, and it only gets better from there. You learn to understand; you start asking yourself “why reject things without a reason?” or “what’s truly wrong about this?” There are plenty of moral rules here in Mexico that we all follow or consider normal without ever asking ourselves why, or doubting many of them at all. It can lead to people hating, judging, going out of their way to make their voices heard as they complain about someone who is doing absolutely no harm to them, just because it doesn’t fit the “normal” scenario they’ve lived in for so long and don’t want broken.

That’s the thing with empathy, just like the talk says at the very end, something I liked a lot, especially because it’s something I’ve personally experienced. Once you manage to walk even an inch in someone else’s shoes, in a position completely different from and radically opposed to yours, one that makes you think, analyze, and try to understand, suddenly the smaller things, the ones you deal with in your daily life, become so much easier to handle and understand.

On the other hand, we had a talk about leadership: giving control, not taking it. I couldn’t agree more. I’ve honestly always had a gripe with the educational system we still use; it’s a very old model, made to produce soldiers, people who follow orders without questioning, making followers in a world that needs thinkers right now. Why do we keep forcing people to learn things they won’t use? I’m not talking about common knowledge that should be taught, like basic math, but are things like chemistry, calculus, and physics, at a higher educational level, really going to help absolutely everyone? I know plenty of people who live happy and successful lives without ever having used or worried about them, so doesn’t that make it wasted time?

Why not push people toward the things they like, the things they’re good at, the ones that will be useful to them in the long run? In a nutshell, this would produce thinkers: people who are excited to work on what they do, enthusiastic about making things work or trying new ideas, and capable of, and interested in, making their own contribution to the bigger picture. What our country really needs right now is thinkers, not followers, for the lack of them is the only thing keeping us stalled where we are, allowing corruption and bad practices to carry on.

QA & Architecture

--Originally published at Debugging My Mind

I like correlating what I read with my own development experiences: trying to notice what I’ve been lacking, what I’ve been doing that resembles what the book describes, and finally what I’ve been doing wrong. The last one tends to come up a lot in a school environment, as we often strive to get things done in a certain way or time frame, and with a different incentive than a job (the reward being grades and learning instead of monetary compensation), which leads us to a lot of mistakes and complications.

When it comes to a quality assurance plan, we most definitely aren’t used to making one. We usually only do proper testing once things start breaking or when the functionality is “complete,” leading to a lot of work fixing and dealing with bugs.

A really detailed plan seems necessary for big projects, ones that will be in development for a long time and require the utmost efficiency to stay on schedule and on budget, but I don’t think this is the case for all projects. While the book mentions a lot of tools for developing these plans (defect tracking, unit testing, source-code tracing, technical reviews, etc.), that doesn’t mean we have to apply every single one meticulously in every project we work on. I believe the goal is to learn to sit down, analyze the problem we’re solving and what we’re building, and choose a good schedule for how quality will be checked and assured throughout the process, which in essence becomes a plan; a quicker one, maybe, but a plan in the end.

As we move into projects that involve multiple people, it definitely becomes necessary to record everything we do, since most of the time I think the biggest problem ends up being communication. Did someone fix this already? Has somebody noticed this bug? Is anyone working on this particular thing? All of these questions can be answered with a plan and proper communication: maybe scheduling bug fixing for particular days, updating the project management tool when work starts on a specific problem, writing down the bugs that have been found, whether they’ve been fixed, and how long they took.

In the end, I believe a huge document that very thoroughly notes down every tiny detail might be way too much for a lot of projects, reserved only for very big or continuous ones. It’s important to consider how we’re going to tackle the task at hand and how much detail it truly needs. I also agree with something the book keeps mentioning: if documentation and a plan are made, if a schedule is created, they exist for a reason; they should be followed and not ignored. What’s the point of putting a lot of time into planning only to scrap and disregard it, something I feel is common in school projects?

After all the QA talk, we dive into system architecture, and oh, do I think this one is important. I think the word itself captures the key difference between a software engineer and a programmer: we are expected to have the technical and logical background to design the solution to a problem; to not only make things work, but to have a reason for them to be the way they are, be it scalability, compatibility, performance, among others.

Most systems don’t exist in a secluded environment where they interact with nothing else and everything stays perfect as long as they remain in their little bubble. That’s why we need architecture; moreover, the big system will in the end be made of smaller subsystems handling different tasks, so should one thing break, why should the whole application stop working (unless it’s a critical component for the whole application’s functionality)?

Thanks to notation like UML, we’re able to create diagrams that can be understood by anyone who knows the notation, making the design readable and easy to implement, which it should be, since the goal is to produce almost a series of steps for how the system works and will be built, not a super complex mess that only the architect understands.

While I believe there will be a class specifically oriented to this topic (or at least I hope there will be), it has already come up in others, where we have to start getting used to minimizing dependencies, separating tasks, and making changes with ease: all good practices that lead us down the path toward becoming good architects.

Programs didn’t use to be made with the thought that they would be maintained in the future. Compared to today’s software, running old systems ends up being a complicated task, needing very specific environments and conditions, while newer software is readily patchable and adaptable to future changes: living proof that maintainability has become an important point in software development.

Planning before the planning

--Originally published at Debugging My Mind

Yes, just when we thought plain planning was good enough, we have to tackle this new idea, preliminary planning. But is it really new?

There are a lot of activities and processes at work that we do without much thought, that pass as quick, unimportant topics, but after some reading and a bit of reflection, it becomes clear they are actually far more relevant than they seem.

How many times do we start a project based on an idea, only to realize not everyone is actually fully attuned to it? I certainly have. A lot of the time we assume everyone understands what we’re going to do, but it ends up causing confusion and problems down the road; it even prevents potential improvements to the project because the goal wasn’t fully understood. We all find it silly to have to physically write down the actual objective and vision of a project, but it ends up being genuinely useful for understanding what we’re all working toward in the end.

In school work, we often don’t have an external group making the big decisions for our projects; it’s mostly up to us (with a bit of feedback from teachers most of the time) to decide, make changes as we see fit, or simply choose the features to include. This ends up being messy most of the time: overestimating what we can add, leaving out things we couldn’t complete, or even changing our end goal to match whatever final product we were able to create.

There’s definitely little to no preliminary planning in most projects we do for school, mostly due to time restrictions and the guided nature of the work, but there are plenty of opportunities to start using what we can, especially where it’s clearly needed. I believe these planning tools work as guidelines for how much more preparation and documentation we can do to make the implementation easier, but it’s up to us to recognize, depending on the project, which ones we have to use, which ones can be dropped due to time or scope, and which ones are critical to making things easier. Even then, I think it’s worth knowing and considering all of them, and how they make the difference between a programmer and a software engineer.


The Man-Month

--Originally published at Debugging My Mind

Whenever a new project arises and becomes a potential client, software developers jump at the opportunity, promising and proposing fantastical and optimistic things. It feels like the “tips” you’re given for job interviews: exaggerate and be as optimistic as possible, nothing bad, only good stuff. Thanks to this, everything becomes “possible” and very attractive time estimates are given really fast, even though estimating the time for a project can be incredibly difficult and probably needs time itself.

I think that’s the main problem: nobody wants to say that something is impossible or that it will take really long. Everyone wants to “look good” with their proposals so they get the job, but isn’t delivering late, or failing the project altogether, worse?

Some time ago, when I began studying software engineering, I had teachers insist that “you’re not programmers, you’re engineers.” We are not being taught to be programmers; anybody can do that, yet software development is often handled from the programmer’s perspective: “Sure, I can do this,” “Yes, this can be done in X time.” Being an engineer means being able to plan, to make estimates and evaluate the given problem, and to use logical and exact knowledge to back up decisions.

It’s true that many other areas of work, including other kinds of engineering, benefit from just adding more people to projects to finish them faster: divide the workload and ta-da, it works. But this is definitely not the case with software. Why does it sound better to make smaller teams instead of adding more people? Simply because, in my own experience, it takes more effort to coordinate everyone, to keep each member in sync with what’s going on and what must be done. Work can’t be easily divided and assigned separately unless it’s completely independent from the rest, which is a difficult thing to come by in software. You can try to make modules and functions as loosely coupled as you can, but they’re still going to need to integrate and communicate with the others; you can’t have the groups working on them completely isolated from each other and expect everything to go well.
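Brooks makes this coordination cost concrete: if everyone on a team of n people must communicate with everyone else, the number of pairwise channels grows as n(n-1)/2, i.e. quadratically. A one-liner shows how fast that blows up:

```python
def communication_paths(n):
    """Pairwise communication channels in a fully connected team of n people."""
    return n * (n - 1) // 2
```

Going from a team of 3 to a team of 10 takes you from 3 channels to 45, which is why adding people to a late project so often slows it down further.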

I agree that the “man-month” formula is a myth for software development, that we’re currently transitioning toward understanding that this area of engineering isn’t like the rest, that the “common rules of work” don’t translate so easily, and that a general shift in mindset is needed to correct all of this.

Planning Keeps Going

--Originally published at Debugging My Mind

I’m going to take a different approach from now on: instead of recounting what the book says chapter by chapter, I’ll just give my opinion on the topics it covers. Whether a given topic comes from one chapter or another will be up for guessing, but it’s probably more important to identify the topics themselves and what I think of them.

First off we have staged delivery: dividing the project into stages, each of which must be deliverable, even as a finished product, at different intervals, instead of having one big thing delivered all at once at the end of the project.

I think this kind of project model fixes not only a lot of the existing development issues, but also a lot of the “business” or non-development issues that stakeholders and users have with development itself.

People want results. They don’t want to be told “oh, it’s going well” or “oh, it’s X% done” without actually seeing the functionality or what they’re going to receive. Sadly, since software remains intangible, documentation aside, for a long time before you can see the actual program working (at least with the usual development methods), it’s bound to face a lot of credibility issues from stakeholders, and that is one of the main problems we see these days.

For me, staged delivery seems like a really good alternative for facing all these problems. Being able to deliver something tangible in periodic releases involves the end user a lot more, lets us meet their expectations, and lets them see continuous results from development, which might just be worth the extra effort created by the new overhead of multiple deliveries and version control.

It definitely doesn’t seem simple to transition to, as pretty as it sounds from the explanation alone, but I do think it’s something I want to try for myself and see how it goes, no matter how small the project. (Having a big list of deliverable milestones and documentation definitely sounds like a big chore and a turn-off, like the book mentions, but hey, we have to do it eventually; might as well start early and save ourselves a lot of bigger trouble later.)

Next up we have change control. Oh boy, change control. As students we often find ourselves stumbling while trying to “apply” some of these concepts through version control on git, keeping code in a repository that can roll back if needed and help control the whole thing.

I actually think we don’t get enough guidance, classes, or teaching about this sort of thing. As students we’re usually hesitant to change anything once we’ve got it going; treating a change as a meeting between all members to do something big to the whole project can be difficult after the initial idea and planning have gone through. And even as we try to use git and repositories to keep control, most of the time we’re never fully sure how to use their full potential or what the “right” approach is, leading to more issues and problems than we want.

I agree it’s important to be willing to consider changes, and especially to propose and examine them in the first phases of development, like requirements and planning, where you can make them basically for free, unlike during coding, where things get a whole lot more complicated. Clients can be reluctant to make big changes way down the road, and it’s important to make very clear what such a change will cost; at the same time, it’s our job to try to draw out all the desired changes as early in development as possible, when they can actually be made.

One More Book – Going Into the Tar Pit

--Originally published at Debugging My Mind

Another day, a different book, this time The Mythical Man-Month by Frederick P. Brooks. What a weird title, might I add, a pattern that continues in the chapter titles, but in the end, surprisingly, they provide very interesting analogies.

We start off with one of those analogies: how animals and beasts would struggle in tar pits and sink no matter how strong or small they were. Just like that, programming teams can sink into the tar pit of a project’s work, be it a big or small team; nobody is exempt from the dangers that software development poses to the success of a project.

Thinking of programmers, a lot of people picture the small team in a garage developing a program, as if anybody could do it; why the need for big corporations with hundreds of developers and employees? Well, yes, anybody can make a program, but it’s the difference between a program, a product, a system, and a system product that marks the quality and transcendence of the results, each step as much as three times harder than the last. As a lot of people like to see it, everything is nice and pretty until you have to document everything, explain every nook and cranny so anybody can get involved, and even make it work with other programs; this is what sets the difference and turns a program into a system product.

So with this comparison between normal programs and a programming system product, and with a software project compared to a tar pit, why do so many people want to get into programming, or are so eager to try it? In my opinion, that’s the most interesting question in this chapter: why is programming so likable? Even someone like me, majoring in software engineering, couldn’t simply answer it in a way anybody could understand; I didn’t have a proper way to explain it, and looking at it another way, I didn’t even know what I liked about it myself.

The book then presents the best description of programming I’ve heard: what attracts so many people and makes it likable, what makes people continue and persevere through all the woes and difficulties that come with it. Long story short, it’s how you’re provided with a relatively easy environment to experiment, to let your creativity loose; and not only that, but the result is something real, something interactive, something that “moves.” It’s this end result that makes what you’re creating so exciting.

I’m excited to see what the rest of the book has in store for me, for I am one of those who think the joys of programming outweigh the woes, and I’m curious to see what boardwalks the book will provide for me to walk across the tar.