I helped Marco finish the Android app for the final presentation. My changes were UI tweaks plus push notifications, which turned out to be more elaborate than I thought. I needed to obtain some keys from our Firebase database so our server could communicate with the GCM service (now called FCM, Firebase Cloud Messaging). When a user signs up or updates their profile, the Android device sends a token that FCM generated for the device, and we store it in Neo4j. Then, when someone invites that user to a pool or asks them to pay a debt, we send a push notification to their phone. Now, to receive push notifications, we needed to register a service in the Android app that listens for the messages; depending on the message, we create a different behavior for when the user clicks the notification or one of its buttons. It was a pain in the ass.
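To give an idea of the server side, here's a minimal sketch of building the FCM message for a stored device token. All names and fields here are made up for illustration, not our actual code; the real send would go through something like firebase-admin's messaging API.

```javascript
// Hypothetical sketch: the token FCM generated for the device is stored
// alongside the user, and when someone invites that user to a pool we
// build a message addressed to that token.
function buildInviteNotification(deviceToken, inviterName, poolName) {
  return {
    token: deviceToken, // the FCM token the Android app sent on signup
    notification: {
      title: 'Pool invitation',
      body: `${inviterName} invited you to the pool "${poolName}"`,
    },
    // Extra data the Android service reads to decide what the
    // notification (or its buttons) should do when clicked.
    data: { type: 'POOL_INVITE', pool: poolName },
  };
}

const msg = buildInviteNotification('fcm-token-123', 'Marco', 'Road trip');
console.log(msg.notification.body); // Marco invited you to the pool "Road trip"
```

The Android service then switches on `data.type` to pick the right behavior for the click.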
Today we officially delivered the project at the Engineering Expo. There we presented our project to some judges and fellow students. Well, usually the explanation was for the judges, and the students would come just to play the game and try to get through the last level and put their playertag in the leaderboard.
I have to say that my score will be (or already has been) surpassed by anyone with even a fraction of decent eye-to-finger reaction time. What I'm trying to say is that even after one semester of development, I'm really bad at the game… After more than an hour playing level 4, I surrendered, estimated that I could get through all the levels in roughly 2 hours, and then pushed my score to the DB manually. I think I deserved to be on the game's leaderboard, even if I couldn't get there using legal abilities.
But now, onto my kinda semester retrospective.
I feel that overall this semester I learned a lot about web development using NodeJS. In my Web Development class project I learned about front-end frameworks, back-end development and deployment, different ways to make requests to the server, implementing MariaDB queries in the server's routes, and handling JWTs and local storage.
What I learned in that class was useful here: I didn't need to worry about how to do all of the back-end development on this project, and could instead focus on enjoying it more, setting up Mongoose and MongoDB, and designing server tests. I found I could really have fun doing those three new things because I didn't worry at all about the rest of the stuff.
Mongoose and MongoDB were a first-time experience for me: from designing the "raw" connection to mLab, to implementing Mongoose for testing and data quality, to migrating to MongoDB Atlas because the Tec wouldn't let us use mLab (because reasons?), and finally updating and designing new schemas and models. It was fun, because it was new and I felt I truly had time to do some research so I could write some clean and functional code.
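Since I can't reproduce our actual models here, this is a dependency-free sketch of the kind of validation a Mongoose schema enforced for us; in Mongoose the first object would be passed to `new mongoose.Schema(...)` instead. Field names are invented for the example.

```javascript
// Plain-JS stand-in for a Mongoose-style schema: each field declares a
// type and constraints, and validate() checks a document against them.
const scoreSchema = {
  playertag: { type: 'string', required: true },
  score:     { type: 'number', required: true, min: 0 },
};

function validate(doc, schema) {
  return Object.entries(schema).every(([field, rule]) => {
    const value = doc[field];
    if (value === undefined) return !rule.required; // missing is ok only if optional
    if (typeof value !== rule.type) return false;   // type check
    if (rule.min !== undefined && value < rule.min) return false; // range check
    return true;
  });
}

console.log(validate({ playertag: 'chris', score: 7200 }, scoreSchema)); // true
console.log(validate({ playertag: 'chris', score: -1 }, scoreSchema));   // false
```

Mongoose does this (and much more, like casting and custom validators) every time you `save()` a model instance.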
And about testing: that was still fun, but it got difficult at times. I'm proud because I pushed myself to design more specific tests with more functionality (such as hooks and a dummy DB, so the real DB wouldn't be polluted). After I thought I couldn't make the API request tests any more complicated, I decided to use the same testing framework to create a script for level testing and design. And that was really complicated: changing a JSON file and seeing the changes in the DB without restarting the server.
In the end, I think we all delivered what we promised. And I think every member of the team did their best on their assigned tasks while also helping each other.
Okay, so now it's the time where I extremely regret the moment I decided to procrastinate publishing both posts from week 10. Supposedly, by the end of the semester I had to have delivered a grand total of 30 blog entries, 2 per week. I currently have 26. That means that if my "I'm sorry entry" (this final review is my "I'm sorry") could be exchanged for those two missed blogs I was talking about, I would have credited 28 blog entries… I don't know how I could have reached 30. Maybe Ken took into account the spring vacations (Semana Santa). But if it's another reason, I have nothing to amend it with, so I will accept the consequences.
Still… It was a great semester, with a great project and a nice team.
Okay, so… Have you heard of the famous cake layers? If you haven’t, please, check out my last blog. Else, we can continue!
And just so you understand the reference: anchors go to the ocean floor… Deeply… To the depths…
So, why did I ask you to read about the security layers? Because defense in depth is based on the layer implementation. We already discussed how layers are supposed to function: if you manage to cover all the holes of each layer with the preceding layers, there will be no way an attack can succeed against your system. The thing is, achieving that level of perfection is impossible. Instead, defense in depth assumes from the start that the layer method can, and eventually will, fail. Layered security only achieves either the exhaustion of the threat (a successful defense) or the slowing of it, giving time for other plans of action and countermeasures to kick in.
Defense in depth also assumes that the hack or breach isn't necessarily of remote origin; it accounts for the possibility of physical theft, threats, unauthorized physical access, and some other unique events (see van Eck phreaking below).
Taking those possible events into account usually involves setting up:
Monitors, alerts and emergency responses
Authorized personnel activity logs
Reports on criminal activity
Remember that the objective of defense in depth is to gain time. The main goal of each newly set-up component is to delay the threat, something we might not achieve using only technological solutions. The extra time gained should be used by the administrator to identify the hack and try to overcome it.
And I guess that is for now regarding security.
As a mini comment on the course: I enjoyed it big time. It was fun and I learned quite a lot of new stuff.
As the 8th blog regarding security, I will talk about the computer security layers. Some people state there are 5; some say there are 8. What I mostly found during my investigation is that there are as many security layers as there are layers in a cake (including the top frosting): 7.
What you, dear reader, need to remember while reading this entry is that this set of rules can be implemented either by a network system administrator or by a regular single-computer user.
The logic behind the security layers is the following: a single defense will be ineffective or flawed if the defense mechanism leaves areas unprotected, with empty spots under its protective layer (umbrella). That is why each layer's purpose is to cover the other layers' empty spots. Theoretically, the empty areas on each layer are so different that an attack can't penetrate through all the holes, and the service remains available.
Application Whitelisting: The objective is to install only a limited set of programs and applications on the administered computers. The fewer applications, the smaller the chance of a breach.
System Restore Solution: This is one of the most talked-about security solutions in the classroom. Basically, it consists of having a plan of action ready for when the peril of a hack arises. This lets the user regain access to their files, even if the system is hacked and damaged files remain.
Network authentication: A system of usernames and passwords must be put in place. This gives access only to authorized users, meaning no login without a password prompt.
Encryption: All of your files, disks and removable devices should be encrypted. This keeps users from risking an information breach, as an encrypted USB stick (or any other device) can't be read on a foreign machine.
Remote authentication: This is a very obvious rule. It consists of setting usernames and passwords for remote server access. These usernames and passwords should only be provided to trustworthy users. This is the obvious part.
Network folder encryption: Most of the websites that deal with this topic consider that this layer should be included in layer 4. I guess it is different enough that I'll let it pass as its own layer (as not everyone uses these features). The concept consists of also encrypting shared data, which prevents unauthorized users from eavesdropping on network information.
Secure Boundary and End-to-End Messaging: This basically consists of using email and instant messaging as a secure method of communication, rather than dealing with the encryption from the server to the user and vice versa.
And I guess that is a simple and easy summary of the 7 layers. Remember to implement all the layers you are capable of activating, or at least find someone who can help you.
This entry is not addressed to regular computer users but, more specifically, to engineering students or people interested in network security, as the concepts are not that common. This entry's topic is enterprise network security.
Virtual Private Network
This first category isn't that complex, as Virtual Private Networks (VPNs) are more and more widely used by general users, so I won't talk a lot about it. VPNs are a method used by enterprises to connect to and access an internal network from the outside, over a more secure, encrypted channel.
Intrusion Detection Systems
An Intrusion Detection System's (IDS) main function is to aid the administrator in detecting the type of attack being carried out against the system. Usually, the IDS also helps the administrator find and execute a solution to the problem, as well as a plan of action for future detections. These systems trace and record logs, signatures and triggered events. Usually, the IDS is attached to the firewall (which I discuss below) and the network router.
The most popular IDS tools I found are Snort and Cisco Network-Based IDS. Both successfully notify the user, in real time, of the signatures of attacks made against the network. The main advantages of Cisco IDS are the results obtained in the aftermath of events (reassembly of IP and TCP sessions) and Cisco's continuous client support. Meanwhile, Snort is open source, cheaper to implement (hardware-wise), flexible (it only requires Linux), and has multiple modalities in which it can be deployed.
Firewalls, also called Intrusion Detection Devices, are software or applications that work directly at the network layer. As most of us already know, firewalls protect the internal network's users from the rest of the world, and vice versa. The rules set in the firewall can block specific functionalities and applications if a port is marked as prohibited. They can also redirect incoming requests from one port to another. When a block or a forwarding happens, a log is generated so the administrator can oversee the data affected by the rules. Usually, the firewall sits after the incoming data is processed by the router.
From what I found, the most common firewalls are Cisco ASA and Sophos. Overall, people seem to prefer Sophos firewalls, basically because Cisco ASA only works for people who can't get out of the traditional enterprise comfort zone. This means that if you want to implement a less usual functionality, ASA won't be enough.
Cisco IDS vs SNORT discussion thread at CISCO support: Cisco IDS vs SNORT.
Firewalls discussion thread at Spiceworks: Sophos vs SonicWall vs Cisco ASA vs Fortinet.
I don't know what to say about my new habit of publishing our weekly reports (and any report in general) very late.
This week the rest of the team's primary focus was to film and deliver the final project video, while I worked on the design of the project poster, the one we needed to present at "The Engineering Expo". I'm very proud of that poster; I think it ended up real nice.
I'm proud of our project. I think we worked very well and accomplished the delivery of a nicely done (and well-tested) product. I'm still amazed at how bad I am at playing it. But my self-doubts were put at ease when I saw at the expo how most of the people who played were having difficulties too, because it is, indeed, a difficult game. I guess my teammates just practiced a whole lot more while designing and testing the levels.
See you the next time!
I leave you my poster down below.
Please, only share.
A virtual private cloud is a cloud service offering an infrastructure in which the various tenants (VPC users) of the platform share the resources available in the cloud while remaining isolated from each other. This isolation is usually achieved by having a private local network and subnetting it (for example through VLANs), assigning a subnet to each user or group of users that need to be directly connected; for other connections, a local DNS server can be used.
VPC services usually also encrypt and mask the communication between their users and the shared resources through a VPN, adding a layer of authentication as well. A VPC implements layered security and provides it as-a-service, at the cost of being highly complicated to set up; but used correctly, it can yield a system with a powerful defense.
This is a technology that I’ve yet to learn, but will do so, hopefully, this summer. If there are some project ideas that you, the reader, have that may help in my learning of this technology, I’ll appreciate it if you shared them in the comments.
In this post I'll talk about containers, how they are used, and a little about their implications for security.
First, what is a container? A container is a lightweight packaging of a piece of software, including everything needed to execute it: code, runtime, system tools, system libraries, settings, etc. A container is isolated; it will run the same every time, anywhere it's executed. When several run on a single machine, they share its operating system kernel, start instantly, and use less computing power and RAM.
Isn’t that a virtual machine?
A virtual machine consists of the following:
Abstraction of physical hardware.
Each VM consists of a full copy of the Guest OS, some apps and necessary binaries and libraries.
The hypervisor allows several VM’s to run on a single machine, turning one computer into many.
Usually in the GBs.
While a container is:
Abstraction of the application layer.
Contains code and its dependencies.
Multiple containers run on the same machine sharing the Host OS kernel with other containers.
Usually in the MBs.
So yeah, it's virtual-machine-esque, but not quite. By using a container, things like environment variables, which may contain sensitive data, are not exposed to the main machine; instead they are cozily packaged along with the software running inside the container. Couple this with a reverse proxy like NGINX, set up SSL, and you're all set for a slightly more secure application.
A technology currently leading the market is Docker, which provides a hub on which to upload your own images for the world to see, and from which to download common images to extend into your own.
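As a taste of what that packaging looks like, here's a hypothetical Dockerfile for a small Node app; the base image, paths, and port are all illustrative, not a real project's.

```dockerfile
# Hypothetical Dockerfile sketch for a Node app (names and ports made up).
FROM node:10-alpine            # common base image pulled from Docker Hub
WORKDIR /app
COPY package*.json ./
RUN npm install --production   # install dependencies inside the image
COPY . .
# Secrets such as DB credentials are passed at run time, not baked in:
#   docker run -e DB_URI=... -p 3000:3000 my-app
EXPOSE 3000
CMD ["node", "server.js"]
```

Building with `docker build -t my-app .` produces an image that runs the same anywhere Docker is installed.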
This post will deal with the security practice of security by layers, plus a little suggestion of a technology that may serve this purpose in a not-so-deep-in-configuration manner.
In information security, security by layers refers to the practice of combining various security control points across an application's pipeline; that is, multiple mitigating security controls protecting the application's resources and data. There are various ways of going about these layers, and no silver bullet, as every system is different, but some examples may be:
Consumer Layered Security Strategy
Extended validation (EV) SSL certificates.
Single sign-on (SSO).
Fraud detection and risk-based authentication.
Transaction signing and encryption.
Secure Web and e-mail.
Open fraud intelligence network.
Enterprise Layered Security Strategy
Workstation application whitelisting.
Workstation system restore solution.
Workstation and network authentication.
File, disk and removable media encryption.
Remote access authentication.
Network folder encryption.
Secure boundary and end-to-end messaging.
Content control and policy-based encryption.
These are the common can-be-found-in-any-page-you-check strategies. In the next blog I'll cover another topic related, in some way, to security by layers: using containers to deploy code.
This last week I did a lot of testing on the users and pools, found a lot of bugs while doing so, and fixed them. I also managed to solve the Travis CI problem with Neo4j (turns out it was trying to connect to another port). So now, whenever someone pushes, the tests run.
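For context, a sketch of the kind of config involved; this is not our actual `.travis.yml`, and the environment variable name is invented, but it shows the idea: point the tests at the port the CI-provided Neo4j instance actually listens on.

```yaml
# Hypothetical .travis.yml sketch (values illustrative).
language: node_js
node_js:
  - "10"
services:
  - neo4j                 # Travis starts a local Neo4j for the build
env:
  # Bolt is Neo4j's binary protocol port (7687 by default, HTTP is 7474);
  # the tests read this variable instead of hardcoding a port.
  - NEO4J_BOLT_URL=bolt://localhost:7687
script:
  - npm test              # runs on every push
```

The bug amounted to the test suite defaulting to one port while the CI service listened on another; making the URL explicit fixed it.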