Last week was the 30th anniversary of the World Wide Web! This is the same year that the Berlin Wall came down and, sadly, the last time that Liverpool were Premier League champions (yes, it was a long time ago).
So thanks to Tim Berners-Lee, inventor of the WWW, you can now read this post from almost any part of the world (except maybe Building 2 at Tecnológico de Monterrey, Campus GDL; the internet there is really bad). Here is a quote of his that describes his vision of the web:
Suppose all the information stored on computers everywhere were linked. Suppose I could program my computer to create a space in which everything could be linked to everything.
The internet has changed our lives in soooo many ways. And I'll show some of them thanks to a very cool site that takes screenshots of old web pages: the Wayback Machine.
It's awesome to see how the internet keeps changing. In 1994 (just 5 years after the WWW was created) Pizza Hut was already doing online pizza delivery. The site (image below) looks like a page that couldn't find its CSS file. But it's still awesome to see how we integrate the net into our daily life.
Simple Smalltalk Testing
Changing the topic a lot, I'll talk about Smalltalk testing. This is based on this article written by Kent Beck (a guru of testing).
If you want to see my thoughts about the article directly, you can download the Hypothesis Chrome extension.
PyCharm and PyUnit
If you have LinkedIn Learning (formerly Lynda), a good course you can follow is this one. I couldn't find any place to leave
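Since PyUnit (Python's `unittest` module) descends from Kent Beck's SUnit, a minimal sketch of what a PyUnit test case looks like may help; the `money_add` function and its expected values are hypothetical examples I made up, not something from the course:

```python
import unittest

# Hypothetical function under test (not from the course).
def money_add(a, b):
    return a + b

class TestMoneyAdd(unittest.TestCase):
    # Each test_* method is one independent check, in the xUnit style.
    def test_simple_addition(self):
        self.assertEqual(money_add(5, 7), 12)

    def test_adding_zero_changes_nothing(self):
        self.assertEqual(money_add(0, 3), 3)
```

You can run it with `python -m unittest` from the terminal, or straight from PyCharm's built-in test runner.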
Welcome back to a series of blog posts about how to set up a little server in a Linux virtual machine. In this post we will learn about GitHub and SSH.
If you are not familiar with the topic, you can go to the first or second part of the series.
Ensure that you have your GitHub account.
Before you start you should have a GitHub account.
Ensure that you have a repository created for testing.
If you followed the last part, we created a web server in Node; we will use that. It will be our root.
Set up your GitHub two-factor authentication.
This part is a straightforward process and GitHub explains it 100 times better than I can, but I'll explain it anyway in case you don't want to move to another site. It's really nothing out of this world; it's just following a series of steps:
Go to your GitHub settings and click on the Security tab
Click the enable two-factor authentication button
Follow the steps on the site
DON'T FORGET TO SAVE YOUR RECOVERY CODES
They send you an email anyway
You are ready
GitHub SSH Keys Setup
This is a little bit harder than the last step, and the GitHub team explains it as well (though some commands don't work the same on Ubuntu).
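As a rough sketch (assuming Ubuntu, an ed25519 key, and the default key path; the email is a placeholder you should replace with your own), the usual commands look like this:

```shell
# Create the .ssh directory if it doesn't exist yet.
mkdir -p ~/.ssh

# Generate a new key pair; replace the email with your GitHub email.
# -N "" means an empty passphrase, just for this sketch.
ssh-keygen -t ed25519 -C "you@example.com" -f ~/.ssh/id_ed25519 -N ""

# Start the ssh-agent and register the new private key with it.
eval "$(ssh-agent -s)"
ssh-add ~/.ssh/id_ed25519

# Print the public key so you can paste it into
# GitHub -> Settings -> SSH and GPG keys -> New SSH key.
cat ~/.ssh/id_ed25519.pub

# Once the key is saved on GitHub, you can verify the connection with:
# ssh -T git@github.com
```

After that, `git clone git@github.com:youruser/yourrepo.git` should work without asking for a password.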
This week topic was “Should it exist?”. New technologies can be very useful to improve or optimize processes, but in many cases they are not necessarily the solution. I think that nowadays entrepreneurs in their desire to be innovative and to be at the forefront try to use new tools that they listened to without even analyzing if their product/startup really needs them.
When people find out that you are a computer systems engineer, they cannot help saying the classics: “Oh, I have a good idea that will definitely change the world, it’s an app that …”, “I want to make an Uber-type app that …”.
And it’s like, come on. Not everything is solved through an app. And don’t get me started on the people who come to request “something” with the magic recipe: “I want software that does blah blah blah, that uses machine learning, blockchain, and a pinch of IoT.” It is true that it is very useful for things to be connected, and for our information to be updated and stored without effort, but you have to ask yourself: do I really need an intelligent napkin?
I really like the design principle KISS (Keep It Simple, Stupid), which states that most systems work better if they are kept simple rather than made complex. I really believe that simplicity should be maintained as a key design objective, and any unnecessary complexity should be avoided. Therefore, it is good to innovate by making something “smart” if it will make a process easier, save time or money, and in general make your life easier; otherwise, no thanks: it definitely should not exist.
What two properties must be satisfied for an input domain to be properly partitioned?
The partition must cover the entire domain (completeness)
The blocks must not overlap (disjoint)
What is an Input Domain Model (IDM)?
An input domain model (IDM) represents the input space of the system under test in an abstract way.
A test engineer describes the structure of the input domain in terms of input characteristics. The test engineer creates a partition for each characteristic. The partition is a set of blocks, each of which contains a set of values. From the perspective of that particular characteristic, all values in each block are considered equivalent.
What gives more tests, each choice coverage or pair-wise coverage?
Pair-wise coverage gives more tests. Each choice coverage only requires one value from each block of each characteristic to appear in at least one test. Pair-wise coverage requires a value from each block of each characteristic to be combined with a value from every block of every other characteristic.
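A tiny sketch in Python can make the difference concrete. The input domain model below is a made-up example (three characteristics with 3, 2 and 2 blocks), not from the book:

```python
from itertools import combinations

# Hypothetical input domain model: each characteristic is partitioned
# into blocks, here just labelled by strings.
idm = {
    "os":      ["linux", "windows", "mac"],
    "browser": ["chrome", "firefox"],
    "user":    ["admin", "guest"],
}

# Each choice coverage: every block of every characteristic appears in
# at least one test, so max(len(blocks)) tests are enough.
def each_choice_tests(idm):
    chars = list(idm.values())
    n = max(len(blocks) for blocks in chars)
    # Reuse the last block of short characteristics to pad the tests.
    return [tuple(blocks[min(i, len(blocks) - 1)] for blocks in chars)
            for i in range(n)]

# Pair-wise coverage must cover every pair of blocks from every pair of
# characteristics; counting those pairs shows why it needs more tests.
def pairs_to_cover(idm):
    return sum(len(a) * len(b)
               for a, b in combinations(idm.values(), 2))

print(len(each_choice_tests(idm)))   # 3 tests satisfy each choice
print(pairs_to_cover(idm))           # 3*2 + 3*2 + 2*2 = 16 pairs
```

For this model, any pair-wise adequate suite needs at least 3 × 2 = 6 tests (the two largest partitions combined), double the 3 tests that each choice coverage needs.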
Welcome to a series of blog posts about how to set up a little server in a Linux virtual machine. If you are not familiar with the topic, you can read a little more about it here (the first part of the series).
Install a Linux distribution
For this task, I chose Ubuntu 18.04.1 LTS running in VirtualBox. This is the same one I use for other courses (maybe that is not a good idea). If you would like to download the same Linux distribution, you can install this “old” mini ISO from Ubuntu.
Other Linux distributions (thanks to @ken_bauer for the links) :
The weekly topic was Data as a Commons for the Smart City. When I think about a commons, I normally think of it as a common good, something that is for the benefit or interest of all. There are a lot of applications that retrieve a lot of data from us daily. We easily allow them to share our location, our searches, and sometimes even our contacts or more personal information. We do this because we just want to dismiss the alert that came up in the application; maybe we click accept as a default action, we are too lazy to read, or maybe we don’t give much importance to what we are sharing and how private it is.
We are the resources that fill databases with data. There are many companies that depend on our collaboration and we do not even know it. It is interesting that there are platforms that share some of the collected information with us, that make us feel part of something and transparently ask us for help; they ask for our data and we give it openly, in a consensual way, because we know that we benefit from other people who also decide to do it.
I just read a report made by people from the DECODE project where they say that data should be the fundamental public infrastructure of the 21st century, as roads, street lights and clean drinking water were in the past. They want city governments to start reconceiving data as a new type of common good, because by helping citizens regain control of their data, it is possible to generate public value rather than private profit.
In fact, the goal of the DECODE project is essentially to create data commons from data produced by people, their devices and sensors; a shared resource
Smart surveillance is the use of automatic video analysis technologies in video surveillance applications. This week we had to watch some videos about surveillance aspects of smart cities. All the mentioned smart surveillance projects have many different applications and great potential, but they also have significant security and privacy implications.
Talking about security implications: the ability to provide real-time alerts, capture high-value video, and provide sophisticated analysis clearly has the potential to enhance security in various public and private facilities. These systems are intended to assist security guards, and will be measured on their ability to improve vigilance and to reduce labor and storage costs. However, the value of the technology is yet to be proven in the field; as more smart surveillance systems get deployed, those systems must be analyzed for their effectiveness in detecting important activity events while generating few false positives (alarms).
As for privacy implications, these kinds of systems can monitor video at a level a human cannot. This gives the monitoring agencies a significantly enhanced level of information about the people in the space, leading to greater concerns about individual privacy and abuse of sensitive personal information. However, the same smart surveillance technologies are providing novel ways of enhancing privacy in video-based systems that were hitherto not possible.
It is difficult to imagine a future where the surveillance of a space is completely automatic; there is clearly an urgent need to improve existing surveillance technologies with better tools to aid the efficacy of the human operators involved in the field. With the increasing availability of inexpensive computing, video infrastructure and better video analysis technologies, smart surveillance systems will eventually replace the existing ones, and the degree of smartness will vary with the level of
DevOps is a hot trend lately, but it’s not a new thing. The term has become really popular in recent years (just like other terms such as machine learning and agile methodologies).
This chart is tricky, since there weren’t that many Google users from 2004 to 2009, but you can still see the big rise in search numbers in recent years.
So… what is DevOps exactly? According to the Agile Admin, the term is all about agile operations and the value of collaboration between developers and the operations staff throughout all stages of the development lifecycle when creating and operating a service.
Developers + Operations = ??? = profit
In other words, DevOps is the cooperation between the development and operations teams and the people involved in the project to achieve a satisfactory delivery. It encapsulates continuous delivery, automated deployment, designing for operability, and monitoring.
DevOps and Agile methodologies are closely tied together; nevertheless, DevOps and Agile are not the same thing.
In my experience, the DevOps team is the team everyone blames when a push is made into development or any other phase and it doesn’t work properly. They are the sysadmins or masters of Jenkins/Travis/etc… but this is not exactly the truth. DevOps is the answer to the fast pace of modern technology.
Like all popular stuff, some people won’t like it (whether they have a reason or not). DevOps is popular, so… some people say it’s the same thing sysadmins have been doing forever, just
I am taking an information security class and this week I had a presentation about cyber vulnerabilities in virtual and augmented reality. This was a very interesting topic for me because I feel like I do not know anything about it, so I had the opportunity/obligation to do some research, and I decided to talk about something related (not vulnerabilities, just AR).
A name that caught my attention was HoloLens. According to CNN, last Sunday Microsoft announced it in Barcelona at the Mobile World Congress, an annual event for the mobile industry.
They showed various possible uses in the workplace in their demo presentation. For example, they presented a real-time virtual conference for a toy company, the manufacture of automobiles, the repair of industrial equipment, and the performance of medical procedures aided by augmented reality technology.
Users who wear a HoloLens headset see the world around them, but with virtual graphics overlaid. The images are often integrated with real objects and surfaces; for example, you may see a virtual cup of coffee that seems to perch on a real table (because of this, the company insists on calling it “mixed reality”). The device has eye tracking and other sensors, along with AI features that facilitate the manipulation of virtual objects. It is also adding more cloud integration.
Since its first device, launched in 2016, the company was leaning toward the uses of technology in the workplace, offering a demonstration of an application that allows NASA researchers to see the surface of Mars in their offices. Since then, the company has found several corporate clients and uses for the HoloLens. Automakers, for example, have used the device to help maximize their production processes.
All agile methods have an underlying assumption that instead of defining all behaviors with requirements or specifications, we demonstrate some behaviors with specific tests. The software is considered correct if it passes a particular set of tests. But to be honest, no one is sure what the term correctness means when applied to a computer program.
Do TDD tests do a good job testing the software?
Test-driven development is an agile approach (agile is a mindset, not a methodology), so it’s a good tool for being responsive to change, because its focus is creating a system that does something as early as possible. TDD allows us to obtain critical feedback as quickly as possible. For example, today at work something in the backend crashed; if it’s going to fail, it’s better that it fails as quickly as possible.
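To make the red–green rhythm concrete, here is a minimal TDD sketch; the `slugify` function and its test cases are a hypothetical example of mine, not from the book. The idea is that the tests were written first (and failed), and the implementation below is the smallest code that makes them pass:

```python
import unittest

# Minimal implementation, driven only by the tests below: it lowercases
# a title, trims surrounding whitespace, and joins words with hyphens.
def slugify(title):
    return title.strip().lower().replace(" ", "-")

class TestSlugify(unittest.TestCase):
    def test_lowercases_and_joins_words(self):
        self.assertEqual(slugify("What is DevOps"), "what-is-devops")

    def test_strips_surrounding_whitespace(self):
        self.assertEqual(slugify("  Hello World "), "hello-world")
```

When a new behavior is needed (say, handling punctuation), you would first add a failing test for it and only then extend `slugify`; that is the quick-feedback loop in miniature.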
Can we automate our tests without TDD?
Can we use TDD without automating our tests?
What four structures do we use for test criteria?
What usually prevents our tests from achieving 100% coverage?
Some organizations in industry who adopt TDD report that it succeeds very well, and others report that it fails. Based on your knowledge of TDD and any experience you have, why do you think it succeeds sometimes but not always?
A few software organizations use test criteria and report great success. However, most organizations do not currently use test criteria. Based on your knowledge and experience, why do you think test criteria are not used more?
A quote that I like a lot from the book (references at the bottom):
In traditional software development, system requirements are often questionable in terms of how complete and