How long was this last partial? Three weeks? I hardly even felt it. I only remember two guest speakers, honestly; if there was another one, I feel bad for not remembering. Mr. Escobedo gave us a good talk about finances, and then there was the last person, the unit testing guy.
Something I would recommend Mr. Escobedo avoid (though I have done it myself, and now I see it as a little disrespectful) is laughing at the guy who asked a question about the current Mexican government. It is okay to make a point with humor, but in a more serious environment (I mean, half of the class does not even turn their cameras on, and the other half never talks), you should not take for granted that your audience will respond the way you hope it will.
As for the unit testing fella (I can’t remember his name), he kept talking about how unit testing almost by itself could land us a job, but I wish he had been a little more direct: put the emphasis on the job part first (since I believe that is what most people really care about throughout their career) and then tell us about his topic. I liked his examples, though.
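I don’t remember his exact examples, but the basic idea is small enough to sketch. Here is a minimal unit test in Python (my choice of language and a made-up function, not necessarily what he showed), using only the standard `unittest` module:

```python
import unittest

def apply_discount(price: float, percent: float) -> float:
    """Return the price after a percentage discount (hypothetical example function)."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return price * (1 - percent / 100)

class TestApplyDiscount(unittest.TestCase):
    def test_regular_discount(self):
        # 25% off 200.0 should be 150.0
        self.assertAlmostEqual(apply_discount(200.0, 25), 150.0)

    def test_invalid_percent_rejected(self):
        # Out-of-range percentages must raise, not silently misprice
        with self.assertRaises(ValueError):
            apply_discount(200.0, 150)
```

You would run this with `python -m unittest <file>`; each test method checks one small, isolated behavior, which is the whole point of a unit test.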
In the end, the fact that testing is present not only in our lives as software engineers, but also in our daily routines and outside activities, gives this class an edge and allows students to experience a satisfaction that few other courses offer.
In my heart at least, I feel like marrying a cybersecurity master’s degree in the future. So when I heard Maggie was into that, the very mention of the word made me pay even more attention than one usually does to a guest speaker (yes, I am generalizing so I don’t feel bad about my own lapses of attention). It was beautiful to hear that you can learn about the topic without needing a bachelor’s in Computer Science. On the other hand, it was painful to hear that I didn’t need a bachelor’s to learn it, and yet I never did (hey, I was in PrepaTec, in Sonora! They only care about teaching leadership here, without much further knowledge). It got me excited about the idea of buying infosec books, now that I am picking up the habit of reading more than ever.
Another interesting topic that is always worth bringing up, since our lives revolve around it whether we want to admit it or not, is money. I am talking about the talk we had with Ricardo Escobedo: it is always good to remind students to care for their earnings and maybe spend them on something that will prove more fruitful.
The Virtual Private Cloud, a.k.a. EC2-VPC, is one of the main components you are bound to use when working with AWS. The first configuration question when working with WorkSpaces (a VDI solution) or RDS will be: which VPC do you want?
A VPC is a logically isolated “data center” where your compute instances and various AWS services reside. Any failure to secure the VPC itself is on Amazon, while failure to provide a secure design for the hosted application is on the client.
AWS’s VPCs do not always contain the services promised to the client within themselves. Exposure to such services generally starts and ends at Layer 3, meaning they come from Amazon itself and are provided whenever they are requested. This is because thousands of customers use them on a massive shared network infrastructure, and running them inside each VPC would not allow such a flexible design. This is also why AWS does not make use of VLANs, since that technology cannot scale well. Nor does it use MPLS.
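To make the “logically isolated data center” idea a bit more concrete: when you create a VPC you pick a private CIDR block and carve it into subnets, typically one per Availability Zone. The AWS-side API calls are out of scope here, but the address math behind that design can be sketched with Python’s standard `ipaddress` module (the CIDR block and the AZ names below are just example values, not anything AWS mandates):

```python
import ipaddress

# Example VPC address block; /16 is a common, purely illustrative choice.
vpc_cidr = ipaddress.ip_network("10.0.0.0/16")

# Carve the /16 into /24 subnets and assign one per Availability Zone.
azs = ["us-east-1a", "us-east-1b", "us-east-1c"]  # hypothetical AZ names
subnets = dict(zip(azs, vpc_cidr.subnets(new_prefix=24)))

for az, subnet in subnets.items():
    # Each subnet gives its hosts private addresses isolated inside the VPC.
    print(f"{az}: {subnet} ({subnet.num_addresses} addresses)")
```

Running this prints one /24 per zone (10.0.0.0/24, 10.0.1.0/24, 10.0.2.0/24), which is the kind of layout you would then hand to the actual VPC configuration.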
A very important and often overlooked aspect of any cloud infrastructure, and in this particular case of AWS, is the location of the services. AWS is extensively decentralized and keeps a tight grip on the location of its main data centers, allowing for a more secure and less compromised physical layer. Nevertheless, what AWS offers varies by location, both in capacity and in available services.
Something often mildly criticized about AWS is its Service Level Agreement (SLA). One expects a cloud service to be available 24/7, without a hitch. The problem, as with everything, is that the larger a service’s reach around the world and the stronger its products become, the more complex they tend to be; maintenance is essential, yet very hard to offer full-time. Most services at Amazon are attached to an SLA, though some have none at all.
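To put an SLA number into perspective, here is a quick calculation of how much monthly downtime a given availability percentage still allows (the percentages are generic examples, not quotes from any particular AWS SLA):

```python
def allowed_downtime_minutes(availability_percent: float, days: int = 30) -> float:
    """Minutes of downtime per billing period that still satisfy the SLA."""
    total_minutes = days * 24 * 60  # 43,200 minutes in a 30-day month
    return total_minutes * (1 - availability_percent / 100)

for sla in (99.0, 99.9, 99.95, 99.99):
    print(f"{sla}% availability -> {allowed_downtime_minutes(sla):.1f} min/month")
```

The jump is dramatic: 99% still allows over seven hours of downtime a month, while 99.99% allows barely four minutes, which is why “one more nine” is so expensive to maintain.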
The AWS Customer Agreement is the foundation for the terms and conditions of the overall service one may receive. Its major considerations are:
Changes: There are bound to be lots of them, with or without warning. This is the world of software engineering, after all.
Security & Privacy: Backup images shall be created in order to keep a safe copy of all data in case of failure.
Clients’ Responsibilities & Indemnification: Everything involving the end users is the responsibility of Amazon’s client (you), including any illegal actions.
Quality of Service: Amazon is responsible for this only if the client proves they have done everything in their power to keep everything running correctly (OS kernel up to date, required network speeds, optimized EBS instances, etc.).
Availability: Not every service will be available at all times; the customer must plan for this.
I am not particularly fond of testing (who is, honestly?), but seeing how it is an essential part of the process (sometimes), it is worth taking a look at the thoughts and methodologies thriving in the industry around it. Chaos engineering and TCR bring some interesting insight to the mix, but I think I would much rather take the Webster Tomskins (gosh, did I even spell that right?) approach: when you are starting a project, you have absolute, total control over everything, with the design being a fundamental part of it.
I am constantly looking ahead, thinking about my future in life, in school, work, family, you name it. It is a habit I have been building for a long time: planning my approach carefully. I know it is not the only way of thinking, and I am probably deviating too much from the topic, but what I am trying to say is that I have been looking forward not only in this class, but in my life in general. This last year has been life-changing for most of us, there is no denying it, and the last few months have opened my eyes through a few experiences. I am mindfully in constant change, and I have been improving so many aspects of my life little by little that I am constantly distracted. I hate this, because this class offers me so much (readings, important guest speakers, etc.) and I feel useless at times when I cannot digest that amount of content.
It is not by any means your fault, Mr. Bauer; I am just too slow at processing input. That is another thing I must improve, because if I were up to the challenge, I would love to be as quick-witted as some of my classmates.
Up until 2010, what we have come to know as the “Cloud” was not the standardized practice we know today. An enterprise wanting to keep its data digital and safe had to invest up to US$800 million to implement this kind of infrastructure (servers, virtual machines, and other hardware/software). This is what Amazon Web Services, or AWS, does best; it is what is known as IaaS, or Infrastructure as a Service, and it can help cut these overgrown budgets down to around US$2 million by providing the infrastructure to businesses whose main concern is not maintaining a cloud. Today, the National Institute of Standards and Technology (NIST) has set rules and best practices to keep infrastructure safe across the USA and probably the world.
Some of the characteristics of AWS include:
On-demand self-service – Procuring a virtual private cloud takes seconds.
Broad network access – HTTPS endpoints and cloud service access from almost anywhere across the globe (using the Internet).
Resource pooling – As a necessary standard for all public cloud providers, resources are carefully distributed across the numerous Availability Zones (AZs).
Rapid elasticity – The bigger an application grows, the more power and services it may need, something AWS is perfectly capable of providing with EC2 (Elastic Compute Cloud).
Measured Services – You are only charged for what you use, and it is all present and accounted for by the bit and by the second.
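The “measured services” point is easy to illustrate: with per-second billing, the charge for an instance is just its hourly rate prorated to the seconds it actually ran. A quick sketch (the $0.09/hour rate is a made-up number, not a real AWS price):

```python
def usage_cost(hourly_rate_usd: float, seconds_used: int) -> float:
    """Prorate an hourly rate down to the seconds actually consumed."""
    return hourly_rate_usd * seconds_used / 3600

# A hypothetical instance at $0.09/hour, run for 20 minutes (1,200 seconds).
cost = usage_cost(0.09, 1200)
print(f"${cost:.4f}")  # 0.09 * 1200/3600 = $0.03
```

The point being: you stop the instance, the meter stops, and nothing else accrues.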
We all have goals in life, in our work, with our family or friends. So it is no surprise that the goal of software is to satisfy the needs of our client(s) in the swiftest, most unobtrusive manner. When you ask, “what is software quality?”, you are in fact referring to the ultimate step of building software. Does it work? More importantly, does it work the way it was agreed it would?
When talking about software quality, the ISO actually has eight characteristics to define it:
1. Maintainability: How easily the people working on the product can understand and redesign it.
2. Portability: The ability of the product to move from one environment to another.
3. Functionality: Does the product meet the user’s expectations?
4. Performance: The scalability and speed at which a product can perform.
5. Compatibility: If the product is meant to work in a certain environment and/or coexist with another product or device, it shall do so correctly.
6. Usability: Can clients understand it and use it correctly?
7. Reliability: How likely is the product to fail, and if it does, how quickly can it be back up and running?
8. Security: The less vulnerable it is to attacks, the better.
A Focus on Process
There exist many paths, and many of them lead to Rome, but a very common methodology is the Software Development Life Cycle (SDLC), a systematic way of building software. It has seven phases:
Requirement collection and analysis: What is needed, and how can we reach that state?
Feasibility study: Can it be done accordingly (both technically and legally)?
Have you ever been in the situation where you finally finish a feature in a project, and then the next day the product owner tells you that it was not what he wanted, so you have to make a lot of changes or even rebuild the feature from zero? Well, you can prevent those scenarios by using verification and validation (V&V) of software.
V&V is the process of checking that the software meets all the given specifications and fulfills the purpose of the client, making sure the software complies with the laws of the countries where it is going to be available. It also helps reduce the number of bugs at the moment of deployment.
For the verification part, you can use tools to write test cases, checking which output the software should give you for a given input, and reviewing all the logical parts of the software to answer the question: are we building the product right? For the validation part, the ideal scenario is that the client is available during the whole development, continuously answering questions about whether what the software does is what he wants. Since this is unlikely to happen, you can follow already-defined strategies to ensure good communication between the product owner and the development team and to fulfill all the requirements: for example, the waterfall development process, or the iterative development processes that are more widely used nowadays because of the constant change in the world of technology.
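As a tiny illustration of the verification side (“are we building the product right?”), suppose the agreed requirement is that a registration form only accepts ages from 18 to 120. The function name and the limits below are invented for the example; the point is how a table of input/expected-output pairs becomes an executable check:

```python
def is_valid_age(age: int) -> bool:
    """Hypothetical requirement: registrants must be 18 to 120 years old."""
    return 18 <= age <= 120

# Each (input, expected) pair is one test case, including the boundary values.
test_cases = [(17, False), (18, True), (65, True), (120, True), (121, False)]

for age, expected in test_cases:
    assert is_valid_age(age) == expected, f"verification failed for age={age}"
print("all verification cases passed")
```

Notice that the boundary values (17/18 and 120/121) are tested explicitly; off-by-one errors at the edges of a requirement are exactly what this kind of case table catches.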
Models and standards for software look, at first, like something strange and unnecessary when you are working on your own, but these standards are helpful for evaluating software quality; this way you can see which areas of improvement you can work on and how to optimize your current project to make a better product.
There are some standards for ensuring quality in software; some of them are the following:
Capability Maturity Model Integration (CMMI):
CMMI was developed at Carnegie Mellon University and is required by many U.S. Government contracts; just from that you can see how important these models are for ensuring that software developed by a given company meets all the required quality standards. The model focuses on product and service development, service establishment and management, and product and service acquisition. This matters as well because, thanks to the best practices the model proposes, the software becomes easier to maintain.
The Personal Software Process (PSP) provides an operational framework with the objective of helping a person manage their own time and productivity. It helps teams of managers and engineers organize projects, small or large. PSP’s objective is to improve the levels of quality and productivity of the team developing those projects. In this framework, each team manages itself: they make their own plans, decide how they track their work, and are responsible for the quality of their work. But before someone can take part in a team, it is necessary to learn about the TSP (Team Software Process), because in each team there are roles that ensure the quality and organization of the team.
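One concrete habit PSP encourages is measuring your own work, for instance tracking the defects you find against the size of what you wrote. A sketch of the classic defects-per-KLOC (thousand lines of code) metric, with invented sample numbers:

```python
def defects_per_kloc(defects_found: int, lines_of_code: int) -> float:
    """Defect density: defects per thousand lines of code, a common PSP-style metric."""
    return defects_found / (lines_of_code / 1000)

# Hypothetical personal log: 6 defects found in a 2,400-line program.
print(f"{defects_per_kloc(6, 2400):.2f} defects/KLOC")
```

Tracked over several projects, a number like this is what lets you see whether your personal process is actually improving.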
Software Process Improvement Capability Determination (ISO/IEC 15504) has the following objectives: