Week 12 – Post Mortem

--Originally published at TC3045 – Sagnelli's blog

This week we had our meeting with our project counselor, and we received excellent feedback from him. I'd say our project is on the final stretch, and we must focus on some minor improvements and on the final QA phase.

For this section of the project, we are focusing on the front-end part and continuing our testing to fix minor bugs.

Stay tuned for further developments on the project.

Week 12 – Pre Mortem

Coming back from the Holy Week holidays, I could really use a second week off. However, it is time to return to our duties and continue with this development. We are only a month away from finishing this course.

The last thing we were focusing on was the front-end part of the app. This is, so to speak, the second phase of the project. We will be focusing on applying the sentiment analysis to show charts on the front end, on using the back end to build the graphic word cloud, and on giving proper information about which candidate is leading the poll.

Stay tuned!

Week 11 – Pre Mortem

Greetings,

This is the week when the sprint presentation is going to be held. On Tuesday we will demonstrate our progress on the functionality of the project so far. The next steps of the project are to be determined, but they are mostly related to front-end integration with what we already have.

Stay tuned for the results of this week.

Week 10 – Post-Mortem

Greetings,

This week I saw some great advancements in our project. Alfonso and I finished polishing the code for the word-counting functionality of the word cloud. The graphic part is going to be developed in the front end; however, the basic functionality of distinguishing which words are most frequently used by candidates is already done. We used Python's Counter class, from the collections module, to do this. The current output of our word cloud program is a list of words with their respective number of appearances.
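The counting boils down to something like this sketch (the sample tweets here are made up; the real text comes from our database):

```python
from collections import Counter

# Hypothetical sample tweets; in the project these come from the database.
tweets = [
    "vamos a ganar la eleccion",
    "la eleccion sera historica",
]

# Join everything, lowercase it, split into words, and count appearances.
words = " ".join(tweets).lower().split()
counts = Counter(words)
print(counts.most_common(3))  # → [('la', 2), ('eleccion', 2), ('vamos', 1)]
```

`most_common(n)` is what gives us the "list of words with their number of appearances" ready to feed to the front-end word cloud.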

This is what we are going to present at the sprint presentation on Tuesday, because the classrooms were occupied on Friday, so the presentation was moved.

Stay tuned to hear what the next steps are going to be.

Week 8 – Pre Mortem

Howdy partners,

It’s me again with another update on what we will be working on this week. Last week, I worked on mining all tweets from a Twitter account. This week, I will be polishing what I did last week, and I will be working with Alfonso on eliminating the stop words of a tweet. Stop words are empty words that are filtered out in natural language processing, such as: the, is, at, which, and on.

By doing this, we will be able to create a more accurate word map, without a lot of occurrences of stop words, focusing just on the information that matters.
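The filtering itself can be as simple as this sketch (the stop-word set here is a tiny hypothetical sample; we will probably use a fuller list in the actual project):

```python
# Tiny sample stop-word list; a real one would be much longer.
STOP_WORDS = {"the", "is", "at", "which", "on", "a", "and"}

def remove_stop_words(tweet):
    """Drop stop words so the word map counts only meaningful terms."""
    return [w for w in tweet.lower().split() if w not in STOP_WORDS]

print(remove_stop_words("The election is on Sunday"))
# → ['election', 'sunday']
```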

Stay tuned for the advancement of this week on the post mortem blog post.

Week 6: Pre-Mortem

This week is all about joining forces. So many things have been worked on separately, and it’s time to integrate everything into a first version of our project. We can now focus on gathering and mining information to be ready for the next phase. The next phase is actually analyzing the data and displaying the results of our application. However, that’s a topic for another day.

In the meantime, let’s wait to see how things are developing within our project. Stay tuned for this week’s post-mortem to know what’s going on.

Cheers!

Week 4: Post-Mortem

Hey! Good to see you again.

This week I added environment variables for the database connection information. I also changed some class names and improved the code for better readability. The connection to the database is working, as well as insertion. However, we still have problems encoding the text as UTF-8 and avoiding weird characters. Emojis are a big problem because, whenever a user has an emoji in their user name or in their tweet, the program halts.

We are still figuring out a way to store the text with the emojis for further sentiment analysis. I know it should be possible.
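A likely culprit (my assumption, not yet verified in our code) is that MySQL’s legacy `utf8` charset only stores characters up to 3 bytes, while emojis encode to 4 bytes in UTF-8. A quick way to see the problem, with the usual `utf8mb4` fix sketched in comments:

```python
# Emojis are 4-byte code points in UTF-8; MySQL's legacy "utf8" charset
# only stores up to 3 bytes per character, so inserts containing them fail.
tweet = "Vota ya 🗳"  # hypothetical tweet text

for ch in set(tweet):
    nbytes = len(ch.encode("utf-8"))
    if nbytes > 3:
        print(f"{ch!r} needs {nbytes} bytes -> breaks MySQL's legacy utf8")

# The usual fix (assuming we keep using PyMySQL) is to connect with the
# 4-byte-safe charset and convert the table as well:
#   pymysql.connect(host=..., user=..., password=..., db=...,
#                   charset="utf8mb4")
#   -- and in MySQL:
#   ALTER TABLE tweets CONVERT TO CHARACTER SET utf8mb4;
```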

Stay tuned.

Week 4: Database & Python 4 life

What’s new, it’s Mike again.

This week is going to be about improving what I previously achieved on the project. I will fix the code, applying better coding practices, and I will add some functionality. Reading from and deleting from the database shall be implemented. Also, I will try to store the tweets without modifying the text, for further use in sentiment analysis. For this last part, everything matters, from emojis to raw text. Hence, I will be researching how to store tweets with special characters and emojis in the database.

This is what I will try to do, and I will keep learning about Python, MySQL, and all these new topics for me.

Stay tuned for what I achieve this week.

From now on, bad practices shall be avoided.

Post-Mortem: Tweets buffer

Sup, it’s me again.

What I did this week was not so different from what I had planned. However, I did not fulfill exactly what I had in mind. I did something a little different, but it was nice progress toward the final project.

I was able to add tweets’ raw text to a list in Python, and when a specified number of tweets is in the list, those tweets are stored in the database.

I did so by modifying streamer.py and adding the tweets to a list there. Afterwards, the function in the micro-service storage.insert-tweet.py is called to insert the tweets once the specified number is reached.
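The buffering idea can be sketched like this (class name and batch size are hypothetical; the real streamer.py and storage.insert-tweet.py differ in the details):

```python
# Buffer tweets in memory and flush them to the database in batches.
class TweetBuffer:
    def __init__(self, insert_many, batch_size=10):
        self.insert_many = insert_many  # e.g. the micro-service's insert function
        self.batch_size = batch_size
        self.tweets = []

    def add(self, raw_text):
        """Queue one tweet; flush automatically when the batch is full."""
        self.tweets.append(raw_text)
        if len(self.tweets) >= self.batch_size:
            self.flush()

    def flush(self):
        """Send all buffered tweets in a single insert, then reset."""
        if self.tweets:
            self.insert_many(self.tweets)
            self.tweets = []

# Usage with a stand-in for the real database insert:
stored = []
buf = TweetBuffer(stored.extend, batch_size=3)
for t in ["t1", "t2", "t3", "t4"]:
    buf.add(t)
# After the third tweet the buffer flushed; "t4" is still waiting.
```

Batching like this also means fewer round trips to the database than inserting tweet by tweet.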

I had trouble when the text had line breaks, unknown characters, or emojis. Hence, I had to modify the text a little to be able to store it in the database.

Up next is adding some environment variables for the database connection, and adding update and delete functions to the application.
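The environment-variable part might look something like this (variable names and defaults are hypothetical; the defaults are only for local development):

```python
import os

# Read connection settings from the environment instead of hard-coding them.
DB_CONFIG = {
    "host": os.getenv("DB_HOST", "localhost"),
    "user": os.getenv("DB_USER", "root"),
    "password": os.getenv("DB_PASSWORD", ""),
    "db": os.getenv("DB_NAME", "elections"),
}
```

This keeps credentials out of the repository and lets each of us run the app with our own database settings.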

Week 3: PyMySQL is about to be applied

“When you want to succeed as bad as you want to breathe, then you’ll be successful.”

– Eric Thomas

Hey, I’m still awake. Now, there’s a reason for it, and as badly as I want that reason to be watching a Netflix series or playing video games, it is not. I am thinking about what I will be doing this week for the Elections Analyzer 2018 project.

This week is going to be all about creating generic functions in Python for DML queries on a MySQL database. I will be dividing these functions into micro-services. Hence, there is going to be a separate micro-service for inserting (creating), selecting (reading), updating, and deleting (CRUD) on a generic database, all using the PyMySQL library.
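To give an idea of what "generic" means here, this sketch builds a parameterized INSERT for any table and row (table and column names are hypothetical; the real micro-services will execute these with PyMySQL, as noted in the comments):

```python
# Generic builder for a parameterized INSERT statement.
def build_insert(table, row):
    """Return the SQL string and the ordered values for one row (a dict)."""
    cols = ", ".join(row.keys())
    placeholders = ", ".join(["%s"] * len(row))
    sql = f"INSERT INTO {table} ({cols}) VALUES ({placeholders})"
    return sql, list(row.values())

sql, values = build_insert("tweets", {"user": "mike", "text": "hola"})
print(sql)     # INSERT INTO tweets (user, text) VALUES (%s, %s)
print(values)  # ['mike', 'hola']

# Execution with PyMySQL would look roughly like:
#   conn = pymysql.connect(host=..., user=..., password=..., db=...)
#   with conn.cursor() as cur:
#       cur.execute(sql, values)
#   conn.commit()
```

Using `%s` placeholders and passing the values separately lets PyMySQL do the escaping, which also avoids SQL injection.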

Stay tuned for the post-mortem update of this week’s development.